Compare commits


22 commits

Author SHA1 Message Date
Jacob Anders
f576b4d60f Handle MissingAttributeError when using OOB inspections to fetch MACs
Currently, if an attempt is made to fetch MAC address information using
OOB inspection on a Redfish-managed node and the EthernetInterfaces
attribute is missing on the node, inspection fails due to a
MissingAttributeError exception raised by sushy. This change catches
and handles this exception.

Change-Id: I6f16da05e19c7efc966128fdf79f13546f51b5a6
(cherry picked from commit f10958a542)
2023-05-09 10:55:17 +00:00
Dmitry Tantsur
7ea86a8669 Do not move nodes to CLEAN FAILED with empty last_error
When cleaning fails, we power off the node, unless it has been running
a clean step already. This happens when aborting cleaning or on a boot
failure. This change makes sure that the power action does not wipe
the last_error field, resulting in a node with provision_state=CLEANFAIL
and last_error=None for several seconds. I've hit this in Metal3.

Also when aborting cleaning, make sure last_error is set during
the transition to CLEANFAIL, not when the clean up thread starts
running.

While here, make sure to log the current step in all cases, not only
when aborting a non-abortable step.

Change-Id: Id21dd7eb44dad149661ebe2d75a9b030aa70526f
Story: #2010603
Task: #47476
(cherry picked from commit 9a0fa631ca)
2023-03-02 18:47:09 +00:00
Zuul
9dad023db8 Merge "Align iRMC driver with Ironic's default boot_mode" into bugfix/21.0 2023-02-20 15:37:09 +00:00
Zuul
1b106d0a29 Merge "Fix selinux context of published image hardlink" into bugfix/21.0 2023-02-01 17:47:20 +00:00
Zuul
f7104e63c3 Merge "Remove reno job" into bugfix/21.0 2023-01-31 19:35:46 +00:00
Jay Faulkner
c8a035411c Move and fix reno config for releasenotes job
In its current place, reno config changes will not cause the
build-openstack-releasenotes job to run, which means changes can land in
that config without being tested. Yikes!

Also fixes an error in the regexp which was preventing this change from
actually fixing the build-openstack-releasenotes job.

Co-Authored-By: Adam McArthur <adam@mcaq.me>
Change-Id: I4d46ba06ada1afb5fd1c63db5850a1983e502a6c
(cherry picked from commit fbe22b23bc)
2023-01-27 18:38:12 +00:00
Riccardo Pittau
cfbb733601 Remove reno job
It does not make sense to run it on bugfix branches.

Change-Id: Iace2ff14b1443b0bb6d96d942eff91c5f583a6c1
2023-01-27 11:25:42 +01:00
Riccardo Pittau
0c5695c76f Pin tox to version lower than 4
This branch is based on zed, where the tox version is lower than 4.

Change-Id: Ic4db815d08d286f210863a1b979e4a9fd5fa82bb
2023-01-19 14:55:26 +01:00
Riccardo Pittau
bdb5cd0cba Fix selinux context of published image hardlink
If the published image is a hardlink, the source SELinux context is
preserved. This could cause an access-denied error when retrieving the
image via its URL.

Change-Id: I550dac9d055ec30ec11530f18a675cf9e16063b5
(cherry picked from commit c05c09fd3a)
2023-01-19 09:25:48 +00:00
Jay Faulkner
74da8b0819 Fixes for tox 4.0
Formatting changes in config file required for tox 4.0.

Change-Id: I84202ac10e9195647162f0b5737ebb610ef1ef93
2022-12-15 18:35:06 +00:00
Vanou Ishii
29546c18dc Align iRMC driver with Ironic's default boot_mode
This commit modifies the iRMC driver to use ironic.conf [deploy]
default_boot_mode as the default value of boot_mode.
Before this commit, the iRMC driver assumed Legacy BIOS as the default
boot_mode, and the value of default_boot_mode didn't have any effect
on the iRMC driver's behavior.

Story: 2010381
Task: 46643
Change-Id: Ic5a235785a1a2bb37fef38bd3a86f40125acb3d9
(cherry picked from commit 071cf9b2dd)
2022-12-08 11:45:07 +13:00
Zuul
fe64d5e1ec Merge "Add support auth protocols for iRMC" into bugfix/21.0 2022-11-14 23:31:25 +00:00
Dmitry Tantsur
4de58fa807 Fix the invalid glance client test
It relied on mocking tenacity.retry, but it's executed on class
initialization. Depending on the ordering, it may do nothing or
it may replace ImageService.call with a mock.

Instead, add a new tenacity helper that loads an option at runtime.
As a nice side effect, [glance]num_retries is now mutable.

Change-Id: I2e02231d294997e824db77c998ef8d352fa69075
(cherry picked from commit cab51a9fcc)
2022-11-09 12:26:05 +00:00
Shukun Song
66b91b1a14 Add support auth protocols for iRMC
This patch adds new SNMPv3 auth protocols to the iRMC driver, which are
supported from iRMC S6 onward.

Change-id: Id2fca59bebb0745e6b16caaaa7838d1f1a2717e1
Story: 2010309
Task: 46353
(cherry picked from commit 233c640838)
2022-10-24 10:56:33 +09:00
Riccardo Pittau
9a06744458 Pin bugfix 21.0 ci jobs to zed
Change-Id: I0d362c163ac0f5e53cfdfa58c051dfde00d4d344
2022-09-26 11:40:13 +02:00
Julia Kreger
d8590454b1 Correct Image properties lookup for paths
The image lookup process, when handed a path, attempts to issue a HEAD
request against the path and gets a response which is devoid of details
like a content length or any properties. This is expected behavior;
however, if we have a path, we also know we don't need to explicitly
make an HTTP HEAD request in an attempt to
match the glance ``kernel_id`` -> ``kernel`` and similar
value population behavior.

Also removes an invalid test which was written before the
overall method was fully understood.

And fixes the default fallback for kickstart template
configuration, so that it uses a URL instead of a
direct file path.

And fix logic in the handling of image property result
set, where the code previously assumed a ``stage2``
ramdisk was always required, and based other cleanup
upon that.

Change-Id: I589e9586d1279604a743746952aeabbc483825df
(cherry picked from commit 4d653ac225)
2022-09-20 02:26:37 +00:00
Zuul
816482e75e Merge "Redfish: Consider password part of the session cache" into bugfix/21.0 2022-09-16 21:55:32 +00:00
Stephen Finucane
3c4f80be77 Fix compatibility with oslo.db 12.1.0
oslo.db 12.1.0 has changed the default value for the 'autocommit'
parameter of 'LegacyEngineFacade' from 'True' to 'False'. This is a
necessary step to ensure compatibility with SQLAlchemy 2.0. However, we
are currently relying on the autocommit behavior and need changes to
explicitly manage sessions. Until that happens, we need to override the
default.

Change-Id: I9e095d810ff5398920e8ffd4f2f089d9b8d29335
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
(cherry picked from commit 74795abf2f)
2022-09-08 16:47:40 +00:00
Julia Kreger
f8c203c021 Redfish: Consider password part of the session cache
Previously, when a password change occurred in ironic,
the session would not be invalidated, and this, in theory,
could lead to all sorts of issues with the old password
still being re-used for authentication.

In a large environment where credentials for BMCs may not
be centralized, this can quickly lead to repeated account
lockout experiences for the BMC service account.

Anyhow, now we consider it in tracking the sessions, so
when the saved password is changed, a new session is
established, and the old session is eventually expired out
of the cache.

Change-Id: I49e1907b89a9096aa043424b205e7bd390ed1a2f
(cherry picked from commit c2ba869040)
2022-09-06 21:49:39 +00:00
Dmitry Tantsur
46fea705ad Improve error message heuristics with jsonschema>=4.8
Before version 4.8, jsonschema did some wild guessing when producing
error messages for schemas with several equivalent subschemas. In
version 4.8 it is no longer done, causing error messages that are more
correct but also more generic.

This change restores guessing the potential root cause without claiming
that it's the only possible root cause. Also the traits schema is
simplified to make it less ambiguous.

See https://github.com/python-jsonschema/jsonschema/issues/991 for details.

Change-Id: Ia75cecd2bfbc602b8b2b85bdda20fdc04c5eadf4
(cherry picked from commit 62f9c61ae6)
2022-08-30 15:31:24 +00:00
Dmitry Tantsur
e0745d4f65 redfish: fixes usage of ValueDisplayName
It's spelled this way, not DisplayValueName.

Change-Id: I170d78bdb7ed0f6c36a80a9f2ceb9629f44394ed
(cherry picked from commit 9f1f58c6af)
2022-08-26 12:45:33 +00:00
OpenStack Release Bot
9a81eb268d Update .gitreview for bugfix/21.0
Change-Id: Iea0b33e84c17e357d3ddf1b383979d3454f92b11
2022-08-18 10:52:08 +00:00
48 changed files with 639 additions and 233 deletions


@ -2,3 +2,4 @@
host=review.opendev.org
port=29418
project=openstack/ironic.git
defaultbranch=bugfix/21.0


@ -123,11 +123,6 @@ Configuration via ``driver_info``
the iRMC with administrator privileges.
- ``driver_info/irmc_password`` property to be ``password`` for
irmc_username.
- ``properties/capabilities`` property to be ``boot_mode:uefi`` if
UEFI boot is required.
- ``properties/capabilities`` property to be ``secure_boot:true`` if
UEFI Secure Boot is required. Please refer to `UEFI Secure Boot Support`_
for more information.
* If ``port`` in ``[irmc]`` section of ``/etc/ironic/ironic.conf`` or
``driver_info/irmc_port`` is set to 443, ``driver_info/irmc_verify_ca``
@ -191,6 +186,22 @@ Configuration via ``driver_info``
- ``driver_info/irmc_snmp_priv_password`` property to be the privacy protocol
pass phrase. The length of pass phrase should be at least 8 characters.
Configuration via ``properties``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Each node is configured for ``irmc`` hardware type by setting the following
ironic node object's properties:
- ``properties/capabilities`` property to be ``boot_mode:uefi`` if
UEFI boot is required, or ``boot_mode:bios`` if Legacy BIOS is required.
If this is not set, the ``default_boot_mode`` option in the ``[deploy]``
section of ``ironic.conf`` will be used.
- ``properties/capabilities`` property to be ``secure_boot:true`` if
UEFI Secure Boot is required. Please refer to `UEFI Secure Boot Support`_
for more information.
Configuration via ``ironic.conf``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -229,9 +240,10 @@ Configuration via ``ironic.conf``
and ``v2c``. The default value is ``public``. Optional.
- ``snmp_security``: SNMP security name required for version ``v3``.
Optional.
- ``snmp_auth_proto``: The SNMPv3 auth protocol. The valid value and the
default value are both ``sha``. We will add more supported valid values
in the future. Optional.
- ``snmp_auth_proto``: The SNMPv3 auth protocol. If using iRMC S4 or S5, the
valid value of this option is only ``sha``. If using iRMC S6, the valid
values are ``sha256``, ``sha384`` and ``sha512``. The default value is
``sha``. Optional.
- ``snmp_priv_proto``: The SNMPv3 privacy protocol. The valid value and
the default value are both ``aes``. We will add more supported valid values
in the future. Optional.


@ -87,8 +87,18 @@ field:
The "auto" mode first tries "session" and falls back
to "basic" if session authentication is not supported
by the Redfish BMC. Default is set in ironic config
as ``[redfish]auth_type``.
as ``[redfish]auth_type``. Most operators should not
need to leverage this setting. Session-based
authentication should generally be used, as it
prevents re-authentication every time a background
task checks in with the BMC.
.. note::
The ``redfish_address``, ``redfish_username``, ``redfish_password``,
and ``redfish_verify_ca`` fields, if changed, will trigger a new session
to be established and cached with the BMC. The ``redfish_auth_type`` field
will only be used for the creation of a new cached session, or should
one be rejected by the BMC.
The ``baremetal node create`` command can be used to enroll
a node with the ``redfish`` driver. For example:
@ -620,6 +630,44 @@ Eject Virtual Media
"boot_device (optional)", "body", "string", "Type of the device to eject (all devices by default)"
Internal Session Cache
======================
The ``redfish`` hardware type and its derived interfaces utilize a built-in
session cache, which prevents Ironic from re-authenticating every time
it attempts to connect to the BMC for any reason.
This consists of cached connector objects which are tracked by
the unique combination of ``redfish_username``, ``redfish_password``,
``redfish_verify_ca``, and finally ``redfish_address``. Changing any one
of those values will trigger a new session to be created.
The ``redfish_system_id`` value is explicitly not considered, as Redfish
has a model of one BMC serving many systems, which is also a model
Ironic supports.
The session cache default size is ``1000`` sessions per conductor.
If you are operating a deployment with a larger number of Redfish
BMCs, it is advisable to tune that number appropriately.
This can be tuned via the API service configuration file,
``[redfish]connection_cache_size``.
Session Cache Expiration
~~~~~~~~~~~~~~~~~~~~~~~~
By default, sessions remain cached in memory for as long as possible,
provided they have not experienced an authentication,
connection, or other unexplained error.
Under normal circumstances, sessions are only rolled out
of the cache, oldest first, when the cache becomes full.
There is no time-based expiration of entries in the session cache.
Of course, the cache is only held in memory, and restarting the
``ironic-conductor`` service will also cause the cache to be rebuilt
from scratch. If this is due to a persistent connectivity issue,
it may be a sign of an unexpected condition; please consider
contacting the Ironic developer community for assistance.
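The cache behaviour described in this section can be sketched in a few lines of Python. The class and method names below are illustrative stand-ins, not Ironic's actual implementation:

```python
import collections
import hashlib


class SessionCache:
    """Illustrative sketch of an oldest-first bounded session cache."""

    MAX_SESSIONS = 1000  # mirrors the [redfish]connection_cache_size default
    _sessions = collections.OrderedDict()

    @classmethod
    def get_or_create(cls, address, username, password, verify_ca, factory):
        # Hash the password into the key so that a changed password yields
        # a brand-new session instead of reusing a stale one.
        pw_hash = hashlib.sha256(password.encode('utf-8')).hexdigest()
        key = (address, username, verify_ca, pw_hash)
        if key not in cls._sessions:
            if len(cls._sessions) >= cls.MAX_SESSIONS:
                # No time-based expiry: evict the oldest entry when full.
                cls._sessions.popitem(last=False)
            cls._sessions[key] = factory()
        return cls._sessions[key]


s1 = SessionCache.get_or_create('bmc1', 'admin', 'pw1', True, object)
s2 = SessionCache.get_or_create('bmc1', 'admin', 'pw1', True, object)
s3 = SessionCache.get_or_create('bmc1', 'admin', 'pw2', True, object)
print(s1 is s2)  # True: identical credentials reuse the cached session
print(s1 is s3)  # False: a password change forces a new session
```

Note that ``redfish_system_id`` is deliberately absent from the key, matching the one-BMC-to-many-systems model described above.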
.. _Redfish: http://redfish.dmtf.org/
.. _Sushy: https://opendev.org/openstack/sushy
.. _TLS: https://en.wikipedia.org/wiki/Transport_Layer_Security


@ -86,11 +86,13 @@ STANDARD_TRAITS = os_traits.get_traits()
CUSTOM_TRAIT_PATTERN = "^%s[A-Z0-9_]+$" % os_traits.CUSTOM_NAMESPACE
CUSTOM_TRAIT_REGEX = re.compile(CUSTOM_TRAIT_PATTERN)
TRAITS_SCHEMA = {'anyOf': [
{'type': 'string', 'minLength': 1, 'maxLength': 255,
'pattern': CUSTOM_TRAIT_PATTERN},
{'type': 'string', 'enum': STANDARD_TRAITS},
]}
TRAITS_SCHEMA = {
'type': 'string', 'minLength': 1, 'maxLength': 255,
'anyOf': [
{'pattern': CUSTOM_TRAIT_PATTERN},
{'enum': STANDARD_TRAITS},
]
}
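To see what the reworked schema shape accepts, a small self-contained check can be run outside Ironic. The standard-trait list here is a made-up subset, not the real `os_traits` output:

```python
import jsonschema

CUSTOM_NAMESPACE = 'CUSTOM_'
CUSTOM_TRAIT_PATTERN = '^%s[A-Z0-9_]+$' % CUSTOM_NAMESPACE
STANDARD_TRAITS = ['HW_CPU_X86_VMX', 'STORAGE_DISK_SSD']  # made-up subset

# Same shape as the reworked schema: the shared type/length constraints are
# hoisted out of anyOf so only the pattern-vs-enum choice stays ambiguous.
TRAITS_SCHEMA = {
    'type': 'string', 'minLength': 1, 'maxLength': 255,
    'anyOf': [
        {'pattern': CUSTOM_TRAIT_PATTERN},
        {'enum': STANDARD_TRAITS},
    ]
}


def is_valid(trait):
    try:
        jsonschema.validate(trait, TRAITS_SCHEMA)
        return True
    except jsonschema.exceptions.ValidationError:
        return False


print(is_valid('CUSTOM_GPU_A100'))  # True: matches the custom pattern
print(is_valid('HW_CPU_X86_VMX'))   # True: listed standard trait
print(is_valid('not-a-trait'))      # False: neither branch matches
```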
LOCAL_LINK_BASE_SCHEMA = {
'type': 'object',


@ -211,12 +211,17 @@ def _validate_schema(name, value, schema):
try:
jsonschema.validate(value, schema)
except jsonschema.exceptions.ValidationError as e:
# The error message includes the whole schema which can be very
# large and unhelpful, so truncate it to be brief and useful
error_msg = ' '.join(str(e).split("\n")[:3])[:-1]
raise exception.InvalidParameterValue(
_('Schema error for %s: %s') % (name, error_msg))
error_msg = _('Schema error for %s: %s') % (name, e.message)
# Sometimes the root message is too generic, try to find a possible
# root cause:
cause = None
current = e
while current.context:
current = jsonschema.exceptions.best_match(current.context)
cause = current.message
if cause is not None:
error_msg += _('. Possible root cause: %s') % cause
raise exception.InvalidParameterValue(error_msg)
return value


@ -33,6 +33,7 @@ from ironic.common.glance_service import service_utils
from ironic.common.i18n import _
from ironic.common import keystone
from ironic.common import swift
from ironic.common import utils
from ironic.conf import CONF
TempUrlCacheElement = collections.namedtuple('TempUrlCacheElement',
@ -114,7 +115,7 @@ class GlanceImageService(object):
@tenacity.retry(
retry=tenacity.retry_if_exception_type(
exception.GlanceConnectionFailed),
stop=tenacity.stop_after_attempt(CONF.glance.num_retries + 1),
stop=utils.stop_after_retries('num_retries', group='glance'),
wait=tenacity.wait_fixed(1),
reraise=True
)


@ -674,20 +674,33 @@ def get_instance_image_info(task, ipxe_enabled=False):
os.path.join(root_dir, node.uuid, 'boot_iso'))
return image_info
image_properties = None
d_info = deploy_utils.get_image_instance_info(node)
isap = node.driver_internal_info.get('is_source_a_path')
def _get_image_properties():
nonlocal image_properties
if not image_properties:
nonlocal image_properties, isap
if not image_properties and not isap:
i_service = service.get_image_service(
d_info['image_source'],
context=ctx)
image_properties = i_service.show(
d_info['image_source'])['properties']
# TODO(TheJulia): At some point, we should teach this code
# to understand that with a path, it *can* retrieve the
# manifest from the HTTP(S) endpoint, which can populate
# image_properties, and drive path to variable population
# like is done with basically Glance.
labels = ('kernel', 'ramdisk')
if not isap:
anaconda_labels = ('stage2', 'ks_template', 'ks_cfg')
else:
# When a path is used, a stage2 ramdisk can be determined
# automatically by anaconda, so it is not an explicit
# requirement.
anaconda_labels = ('ks_template', 'ks_cfg')
if not (i_info.get('kernel') and i_info.get('ramdisk')):
# NOTE(rloo): If both are not specified in instance_info
# we won't use any of them. We'll use the values specified
@ -700,20 +713,13 @@ def get_instance_image_info(task, ipxe_enabled=False):
i_info[label] = str(image_properties[label + '_id'])
node.instance_info = i_info
node.save()
# TODO(TheJulia): Add functionality to look/grab the hints file
# for anaconda and just run with the entire path.
anaconda_labels = ()
if deploy_utils.get_boot_option(node) == 'kickstart':
isap = node.driver_internal_info.get('is_source_a_path')
# stage2: installer stage2 squashfs image
# ks_template: anaconda kickstart template
# ks_cfg - rendered ks_template
if not isap:
anaconda_labels = ('stage2', 'ks_template', 'ks_cfg')
else:
# When a path is used, a stage2 ramdisk can be determined
# automatically by anaconda, so it is not an explicit
# requirement.
anaconda_labels = ('ks_template', 'ks_cfg')
# NOTE(rloo): We save stage2 & ks_template values in case they
# are changed by the user after we start using them and to
# prevent re-computing them again.
@ -733,26 +739,31 @@ def get_instance_image_info(task, ipxe_enabled=False):
else:
node.set_driver_internal_info(
'stage2', str(image_properties['stage2_id']))
# NOTE(TheJulia): A kickstart template is entirely independent
# of the stage2 ramdisk. In the end, it was the configuration which
# told anaconda how to execute.
if i_info.get('ks_template'):
# If the value is set, we always overwrite it, in the event
# a rebuild is occurring or something along those lines.
node.set_driver_internal_info('ks_template',
i_info['ks_template'])
# NOTE(TheJulia): A kickstart template is entirely independent
# of the stage2 ramdisk. In the end, it was the configuration which
# told anaconda how to execute.
if i_info.get('ks_template'):
# If the value is set, we always overwrite it, in the event
# a rebuild is occurring or something along those lines.
node.set_driver_internal_info('ks_template',
i_info['ks_template'])
else:
_get_image_properties()
# ks_template is an optional property on the image
if image_properties and 'ks_template' in image_properties:
node.set_driver_internal_info(
'ks_template', str(image_properties['ks_template']))
else:
_get_image_properties()
# ks_template is an optional property on the image
if 'ks_template' not in image_properties:
# If not defined, default to the overall system default
# kickstart template, as opposed to a user supplied
# template.
node.set_driver_internal_info(
'ks_template', CONF.anaconda.default_ks_template)
else:
node.set_driver_internal_info(
'ks_template', str(image_properties['ks_template']))
# If not defined, default to the overall system default
# kickstart template, as opposed to a user supplied
# template.
node.set_driver_internal_info(
'ks_template',
'file://' + os.path.abspath(
CONF.anaconda.default_ks_template
)
)
node.save()
for label in labels + anaconda_labels:
@ -1249,6 +1260,8 @@ def cache_ramdisk_kernel(task, pxe_info, ipxe_enabled=False):
CONF.deploy.http_root,
'stage2')
ensure_tree(os.path.dirname(file_path))
if 'ks_cfg' in pxe_info:
# ks_cfg is rendered later by the driver using ks_template. It cannot
# be fetched and cached.
t_pxe_info.pop('ks_cfg')


@ -269,6 +269,9 @@ _FASTTRACK_LOOKUP_ALLOWED_STATES = (ENROLL, MANAGEABLE, AVAILABLE,
FASTTRACK_LOOKUP_ALLOWED_STATES = frozenset(_FASTTRACK_LOOKUP_ALLOWED_STATES)
"""States where API lookups are permitted with fast track enabled."""
FAILURE_STATES = frozenset((DEPLOYFAIL, CLEANFAIL, INSPECTFAIL,
RESCUEFAIL, UNRESCUEFAIL, ADOPTFAIL))
##############
# Power states


@ -681,3 +681,18 @@ def is_fips_enabled():
except Exception:
pass
return False
def stop_after_retries(option, group=None):
"""A tenacity retry helper that stops after retries specified in conf."""
# NOTE(dtantsur): fetch the option inside of the nested call, otherwise it
# cannot be changed at runtime.
def should_stop(retry_state):
if group:
conf = getattr(CONF, group)
else:
conf = CONF
num_retries = getattr(conf, option)
return retry_state.attempt_number >= num_retries + 1
return should_stop
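Because the helper returns a plain callable that re-reads the option on every attempt, it can be tested without tenacity at all. This sketch uses a stand-in CONF object rather than Ironic's real configuration:

```python
from types import SimpleNamespace


class FakeConf:
    """Stand-in for ironic's CONF; in Ironic this is [glance]num_retries."""
    num_retries = 2


CONF = FakeConf()


def stop_after_retries(option, conf=CONF):
    # The option is read on every attempt, so runtime changes take effect.
    def should_stop(retry_state):
        return retry_state.attempt_number >= getattr(conf, option) + 1
    return should_stop


stop = stop_after_retries('num_retries')
# tenacity passes a retry_state with an attempt_number; fake one here.
print(stop(SimpleNamespace(attempt_number=2)))  # False: budget not exhausted
print(stop(SimpleNamespace(attempt_number=3)))  # True: 1 attempt + 2 retries

# Because the option is read lazily, a runtime change is honored:
CONF.num_retries = 0
print(stop(SimpleNamespace(attempt_number=1)))  # True
```

This lazy read is exactly why `[glance]num_retries` could be made mutable in the accompanying config change.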


@ -247,12 +247,21 @@ def do_next_clean_step(task, step_index, disable_ramdisk=None):
task.process_event(event)
def get_last_error(node):
last_error = _('By request, the clean operation was aborted')
if node.clean_step:
last_error += (
_(' during or after the completion of step "%s"')
% conductor_steps.step_id(node.clean_step)
)
return last_error
@task_manager.require_exclusive_lock
def do_node_clean_abort(task, step_name=None):
def do_node_clean_abort(task):
"""Internal method to abort an ongoing operation.
:param task: a TaskManager instance with an exclusive lock
:param step_name: The name of the clean step.
"""
node = task.node
try:
@ -270,12 +279,13 @@ def do_node_clean_abort(task, step_name=None):
set_fail_state=False)
return
last_error = get_last_error(node)
info_message = _('Clean operation aborted for node %s') % node.uuid
last_error = _('By request, the clean operation was aborted')
if step_name:
msg = _(' after the completion of step "%s"') % step_name
last_error += msg
info_message += msg
if node.clean_step:
info_message += (
_(' during or after the completion of step "%s"')
% node.clean_step
)
node.last_error = last_error
node.clean_step = None
@ -317,7 +327,7 @@ def continue_node_clean(task):
target_state = None
task.process_event('fail', target_state=target_state)
do_node_clean_abort(task, step_name)
do_node_clean_abort(task)
return
LOG.debug('The cleaning operation for node %(node)s was '


@ -1336,7 +1336,8 @@ class ConductorManager(base_manager.BaseConductorManager):
callback=self._spawn_worker,
call_args=(cleaning.do_node_clean_abort, task),
err_handler=utils.provisioning_error_handler,
target_state=target_state)
target_state=target_state,
last_error=cleaning.get_last_error(node))
return
if node.provision_state == states.RESCUEWAIT:


@ -527,7 +527,8 @@ class TaskManager(object):
self.release_resources()
def process_event(self, event, callback=None, call_args=None,
call_kwargs=None, err_handler=None, target_state=None):
call_kwargs=None, err_handler=None, target_state=None,
last_error=None):
"""Process the given event for the task's current state.
:param event: the name of the event to process
@ -540,6 +541,8 @@ class TaskManager(object):
prev_target_state)
:param target_state: if specified, the target provision state for the
node. Otherwise, use the target state from the fsm
:param last_error: last error to set on the node together with
the state transition.
:raises: InvalidState if the event is not allowed by the associated
state machine
"""
@ -572,13 +575,15 @@ class TaskManager(object):
# set up the async worker
if callback:
# clear the error if we're going to start work in a callback
self.node.last_error = None
# update the error if we're going to start work in a callback
self.node.last_error = last_error
if call_args is None:
call_args = ()
if call_kwargs is None:
call_kwargs = {}
self.spawn_after(callback, *call_args, **call_kwargs)
elif last_error is not None:
self.node.last_error = last_error
# publish the state transition by saving the Node
self.node.save()


@ -302,9 +302,11 @@ def node_power_action(task, new_state, timeout=None):
# Set the target_power_state and clear any last_error, if we're
# starting a new operation. This will expose to other processes
# and clients that work is in progress.
node['target_power_state'] = target_state
node['last_error'] = None
# and clients that work is in progress. Keep the last_error intact
# if the power action happens as a result of a failure.
node.target_power_state = target_state
if node.provision_state not in states.FAILURE_STATES:
node.last_error = None
node.timestamp_driver_internal_info('last_power_state_change')
# NOTE(dtantsur): wipe token on shutting down, otherwise a reboot in
# fast-track (or an accidentally booted agent) will cause subsequent


@ -114,6 +114,7 @@ opts = [
'will determine how many containers are created.')),
cfg.IntOpt('num_retries',
default=0,
mutable=True,
help=_('Number of retries when downloading an image from '
'glance.')),
]


@ -81,9 +81,20 @@ opts = [
help='SNMP polling interval in seconds'),
cfg.StrOpt('snmp_auth_proto',
default='sha',
choices=[('sha', _('Secure Hash Algorithm 1'))],
choices=[('sha', _('Secure Hash Algorithm 1, supported in iRMC '
'S4 and S5.')),
('sha256', _('Secure Hash Algorithm 2 with 256 bits '
'digest, only supported in iRMC S6.')),
('sha384', _('Secure Hash Algorithm 2 with 384 bits '
'digest, only supported in iRMC S6.')),
('sha512', _('Secure Hash Algorithm 2 with 512 bits '
'digest, only supported in iRMC S6.'))],
help=_("SNMPv3 message authentication protocol ID. "
"Required for version 'v3'. 'sha' is supported.")),
"Required for version 'v3'. The valid options are "
"'sha', 'sha256', 'sha384' and 'sha512', while 'sha' is "
"the only supported protocol in iRMC S4 and S5, and "
"from iRMC S6, 'sha256', 'sha384' and 'sha512' are "
"supported, but 'sha' is not supported any more.")),
cfg.StrOpt('snmp_priv_proto',
default='aes',
choices=[('aes', _('Advanced Encryption Standard'))],


@ -13,4 +13,6 @@
from oslo_db.sqlalchemy import enginefacade
# NOTE(dtantsur): we want sqlite as close to a real database as possible.
enginefacade.configure(sqlite_fk=True)
# FIXME(stephenfin): we need to remove reliance on autocommit semantics ASAP
# since it's not compatible with SQLAlchemy 2.0
enginefacade.configure(sqlite_fk=True, __autocommit=True)


@ -211,6 +211,16 @@ class ImageHandler(object):
try:
os.link(image_file, published_file)
os.chmod(image_file, self._file_permission)
try:
utils.execute(
'/usr/sbin/restorecon', '-i', '-R', '-v', public_dir)
except FileNotFoundError as exc:
LOG.debug(
"Could not restore SELinux context on "
"%(public_dir)s, restorecon command not found.\n"
"Error: %(error)s",
{'public_dir': public_dir,
'error': exc})
except OSError as exc:
LOG.debug(


@ -83,7 +83,9 @@ SNMP_V3_REQUIRED_PROPERTIES = {
SNMP_V3_OPTIONAL_PROPERTIES = {
'irmc_snmp_auth_proto': _("SNMPv3 message authentication protocol ID. "
"Required for version 'v3'. "
"'sha' is supported."),
"If using iRMC S4/S5, only 'sha' is supported."
"If using iRMC S6, the valid options are "
"'sha256', 'sha384', 'sha512'."),
'irmc_snmp_priv_proto': _("SNMPv3 message privacy (encryption) protocol "
"ID. Required for version 'v3'. "
"'aes' is supported."),
@ -243,7 +245,8 @@ def _parse_snmp_driver_info(node, info):
def _parse_snmp_v3_info(node, info):
snmp_info = {}
missing_info = []
valid_values = {'irmc_snmp_auth_proto': ['sha'],
valid_values = {'irmc_snmp_auth_proto': ['sha', 'sha256', 'sha384',
'sha512'],
'irmc_snmp_priv_proto': ['aes']}
valid_protocols = {'irmc_snmp_auth_proto': snmp.snmp_auth_protocols,
'irmc_snmp_priv_proto': snmp.snmp_priv_protocols}


@ -191,9 +191,14 @@ def _inspect_hardware(node, existing_traits=None, **kwargs):
except (scci.SCCIInvalidInputError,
scci.SCCIClientError,
exception.SNMPFailure) as e:
advice = ""
if ("SNMP operation" in str(e)):
advice = ("The SNMP related parameters' value may be different "
"with the server, please check if you have set them "
"correctly.")
error = (_("Inspection failed for node %(node_id)s "
"with the following error: %(error)s") %
{'node_id': node.uuid, 'error': e})
"with the following error: %(error)s. (advice)s") %
{'node_id': node.uuid, 'error': e, 'advice': advice})
raise exception.HardwareInspectionFailure(error=error)
return props, macs, new_traits


@ -27,9 +27,9 @@ from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic import conf
from ironic.drivers import base
from ironic.drivers.modules import boot_mode_utils
from ironic.drivers.modules import ipmitool
from ironic.drivers.modules.irmc import common as irmc_common
from ironic.drivers import utils as driver_utils
irmc = importutils.try_import('scciclient.irmc')
@ -252,7 +252,7 @@ class IRMCManagement(ipmitool.IPMIManagement):
"Invalid boot device %s specified.") % device)
uefi_mode = (
driver_utils.get_node_capability(task.node, 'boot_mode') == 'uefi')
boot_mode_utils.get_boot_mode(task.node) == 'uefi')
# disable 60 secs timer
timeout_disable = "0x00 0x08 0x03 0x08"


@ -203,9 +203,12 @@ def _set_power_state(task, target_state, timeout=None):
_wait_power_state(task, states.SOFT_REBOOT, timeout=timeout)
except exception.SNMPFailure as snmp_exception:
advice = ("The SNMP related parameters' value may be different with "
"the server, please check if you have set them correctly.")
LOG.error("iRMC failed to acknowledge the target state "
"for node %(node_id)s. Error: %(error)s",
{'node_id': node.uuid, 'error': snmp_exception})
"for node %(node_id)s. Error: %(error)s. %(advice)s",
{'node_id': node.uuid, 'error': snmp_exception,
'advice': advice})
raise exception.IRMCOperationError(operation=target_state,
error=snmp_exception)


@ -54,20 +54,23 @@ class RedfishBIOS(base.BIOSInterface):
driver='redfish',
reason=_("Unable to import the sushy library"))
def _parse_allowable_values(self, allowable_values):
def _parse_allowable_values(self, node, allowable_values):
"""Convert the BIOS registry allowable_value list to expected strings
:param node: the node object, used when logging unparsable entries
:param allowable_values: list of dicts of valid values for enumeration
:returns: list containing only allowable value names
"""
# Get name from ValueName if it exists, otherwise use DisplayValueName
# Get name from ValueName if it exists, otherwise use ValueDisplayName
new_list = []
for dic in allowable_values:
for key in dic:
if key == 'ValueName' or key == 'DisplayValueName':
new_list.append(dic[key])
break
key = dic.get('ValueName') or dic.get('ValueDisplayName')
if key:
new_list.append(key)
else:
LOG.warning('Cannot detect the value name for enumeration '
'item %(item)s for node %(node)s',
{'item': dic, 'node': node.uuid})
return new_list
@ -129,7 +132,8 @@ class RedfishBIOS(base.BIOSInterface):
setting[k] = getattr(reg, k, None)
if k == "allowable_values" and isinstance(setting[k],
list):
setting[k] = self._parse_allowable_values(setting[k])
setting[k] = self._parse_allowable_values(
task.node, setting[k])
LOG.debug('Cache BIOS settings for node %(node_uuid)s',
{'node_uuid': task.node.uuid})


@ -1197,9 +1197,18 @@ class RedfishManagement(base.ManagementInterface):
:raises: RedfishError on an error from the Sushy library
:returns: A list of MAC addresses for the node
"""
system = redfish_utils.get_system(task.node)
try:
system = redfish_utils.get_system(task.node)
return list(redfish_utils.get_enabled_macs(task, system))
# NOTE(janders) we should handle MissingAttributeError separately
# from other SushyErrors - some servers (e.g. some Cisco UCSB and UCSX
# blades) are missing the EthernetInterfaces attribute yet could be
# provisioned successfully if MAC information is provided manually AND
# this exception is caught and handled accordingly.
except sushy.exceptions.MissingAttributeError as exc:
LOG.warning('Cannot get MAC addresses for node %(node)s: %(exc)s',
{'node': task.node.uuid, 'exc': exc})
# if the exception is not a MissingAttributeError, raise it
except sushy.exceptions.SushyError as exc:
msg = (_('Failed to get network interface information on node '
'%(node)s: %(exc)s')


@ -15,6 +15,7 @@
# under the License.
import collections
import hashlib
import os
from urllib import parse as urlparse
@ -198,43 +199,59 @@ class SessionCache(object):
_sessions = collections.OrderedDict()
def __init__(self, driver_info):
# Hash the password in the data structure, so we can
# include it in the session key.
# NOTE(TheJulia): Multiplying the address by 4, to ensure
# we meet a minimum of 16 bytes for salt.
pw_hash = hashlib.pbkdf2_hmac(
'sha512',
driver_info.get('password').encode('utf-8'),
str(driver_info.get('address') * 4).encode('utf-8'), 40)
self._driver_info = driver_info
# Assemble the session key and append the hashed password to it,
# which forces new sessions to be established when the saved password
# is changed, just like the username, or address.
self._session_key = tuple(
self._driver_info.get(key)
for key in ('address', 'username', 'verify_ca')
)
) + (pw_hash.hex(),)
def __enter__(self):
try:
return self.__class__._sessions[self._session_key]
except KeyError:
auth_type = self._driver_info['auth_type']
LOG.debug('A cached redfish session for Redfish endpoint '
'%(endpoint)s was not detected, initiating a session.',
{'endpoint': self._driver_info['address']})
auth_class = self.AUTH_CLASSES[auth_type]
auth_type = self._driver_info['auth_type']
authenticator = auth_class(
username=self._driver_info['username'],
password=self._driver_info['password']
)
auth_class = self.AUTH_CLASSES[auth_type]
sushy_params = {'verify': self._driver_info['verify_ca'],
'auth': authenticator}
if 'root_prefix' in self._driver_info:
sushy_params['root_prefix'] = self._driver_info['root_prefix']
conn = sushy.Sushy(
self._driver_info['address'],
**sushy_params
)
authenticator = auth_class(
username=self._driver_info['username'],
password=self._driver_info['password']
)
if CONF.redfish.connection_cache_size:
self.__class__._sessions[self._session_key] = conn
sushy_params = {'verify': self._driver_info['verify_ca'],
'auth': authenticator}
if 'root_prefix' in self._driver_info:
sushy_params['root_prefix'] = self._driver_info['root_prefix']
conn = sushy.Sushy(
self._driver_info['address'],
**sushy_params
)
if (len(self.__class__._sessions)
> CONF.redfish.connection_cache_size):
self._expire_oldest_session()
if CONF.redfish.connection_cache_size:
self.__class__._sessions[self._session_key] = conn
# Save a secure hash of the password into memory, so if we
# observe it change, we can detect the session is no longer valid.
return conn
if (len(self.__class__._sessions)
> CONF.redfish.connection_cache_size):
self._expire_oldest_session()
return conn
def __exit__(self, exc_type, exc_val, exc_tb):
# NOTE(etingof): perhaps this session token is no good

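The effect of folding the password hash into the cache key can be shown in a few lines: the same ``pbkdf2_hmac`` call as above (SHA-512, the address repeated four times as a >= 16-byte salt, 40 iterations), with illustrative ``driver_info`` values:

```python
import hashlib

def session_key(driver_info):
    # Repeat the address four times so the salt is at least 16 bytes,
    # and hash the password so it never sits in the key in clear text.
    pw_hash = hashlib.pbkdf2_hmac(
        'sha512',
        driver_info['password'].encode('utf-8'),
        (driver_info['address'] * 4).encode('utf-8'),
        40)
    return tuple(driver_info.get(k)
                 for k in ('address', 'username', 'verify_ca')
                 ) + (pw_hash.hex(),)

info = {'address': 'https://bmc.example', 'username': 'admin',
        'verify_ca': True, 'password': 'bar'}
key_old = session_key(info)
key_new = session_key(dict(info, password='foo'))
# Changing only the password changes the key, so the stale cached
# session can no longer be looked up and a fresh one is established.
assert key_old != key_new
assert key_old[:3] == key_new[:3]
```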

@ -24,7 +24,6 @@ from glanceclient import exc as glance_exc
from keystoneauth1 import loading as ks_loading
from oslo_config import cfg
from oslo_utils import uuidutils
import tenacity
import testtools
from ironic.common import context
@ -204,20 +203,18 @@ class TestGlanceImageService(base.TestCase):
image_id = uuidutils.generate_uuid()
writer = NullWriter()
with mock.patch.object(tenacity, 'retry', autospec=True) as mock_retry:
# When retries are disabled, we should get an exception
self.config(num_retries=0, group='glance')
self.assertRaises(exception.GlanceConnectionFailed,
stub_service.download, image_id, writer)
# When retries are disabled, we should get an exception
self.config(num_retries=0, group='glance')
self.assertRaises(exception.GlanceConnectionFailed,
stub_service.download, image_id, writer)
# Now lets enable retries. No exception should happen now.
self.config(num_retries=1, group='glance')
importlib.reload(image_service)
stub_service = image_service.GlanceImageService(stub_client,
stub_context)
tries = [0]
stub_service.download(image_id, writer)
mock_retry.assert_called_once()
# Now lets enable retries. No exception should happen now.
self.config(num_retries=1, group='glance')
importlib.reload(image_service)
stub_service = image_service.GlanceImageService(stub_client,
stub_context)
tries = [0]
stub_service.download(image_id, writer)
def test_download_no_data(self):
self.client.fake_wrapped = None


@ -1357,7 +1357,7 @@ class PXEInterfacesTestCase(db_base.DbTestCase):
'LiveOS',
'squashfs.img')),
'ks_template':
(CONF.anaconda.default_ks_template,
('file://' + CONF.anaconda.default_ks_template,
os.path.join(CONF.deploy.http_root,
self.node.uuid,
'ks.cfg.template')),
@ -1375,63 +1375,7 @@ class PXEInterfacesTestCase(db_base.DbTestCase):
self.assertEqual(expected_info, image_info)
# In the absence of a kickstart template in both instance_info and
# the image, the default kickstart template is used
self.assertEqual(CONF.anaconda.default_ks_template,
image_info['ks_template'][0])
calls = [mock.call(task.node), mock.call(task.node)]
boot_opt_mock.assert_has_calls(calls)
# Instance info takes precedence over the kickstart template on the
# image
properties['properties'] = {'ks_template': 'glance://template_id'}
task.node.instance_info['ks_template'] = 'https://server/fake.tmpl'
image_show_mock.return_value = properties
image_info = pxe_utils.get_instance_image_info(
task, ipxe_enabled=False)
self.assertEqual('https://server/fake.tmpl',
image_info['ks_template'][0])
@mock.patch('ironic.drivers.modules.deploy_utils.get_boot_option',
return_value='kickstart', autospec=True)
@mock.patch.object(image_service.GlanceImageService, 'show', autospec=True)
def test_get_instance_image_info_with_kickstart_url(
self, image_show_mock, boot_opt_mock):
properties = {'properties': {u'kernel_id': u'instance_kernel_uuid',
u'ramdisk_id': u'instance_ramdisk_uuid',
u'image_source': u'http://path/to/os/'}}
expected_info = {'ramdisk':
('instance_ramdisk_uuid',
os.path.join(CONF.pxe.tftp_root,
self.node.uuid,
'ramdisk')),
'kernel':
('instance_kernel_uuid',
os.path.join(CONF.pxe.tftp_root,
self.node.uuid,
'kernel')),
'ks_template':
(CONF.anaconda.default_ks_template,
os.path.join(CONF.deploy.http_root,
self.node.uuid,
'ks.cfg.template')),
'ks_cfg':
('',
os.path.join(CONF.deploy.http_root,
self.node.uuid,
'ks.cfg'))}
image_show_mock.return_value = properties
self.context.auth_token = 'fake'
with task_manager.acquire(self.context, self.node.uuid,
shared=True) as task:
dii = task.node.driver_internal_info
dii['is_source_a_path'] = True
task.node.driver_internal_info = dii
task.node.save()
image_info = pxe_utils.get_instance_image_info(
task, ipxe_enabled=False)
self.assertEqual(expected_info, image_info)
# In the absence of a kickstart template in both instance_info and
# the image, the default kickstart template is used
self.assertEqual(CONF.anaconda.default_ks_template,
self.assertEqual('file://' + CONF.anaconda.default_ks_template,
image_info['ks_template'][0])
calls = [mock.call(task.node), mock.call(task.node)]
boot_opt_mock.assert_has_calls(calls)
@ -1463,7 +1407,7 @@ class PXEInterfacesTestCase(db_base.DbTestCase):
self.node.uuid,
'kernel')),
'ks_template':
(CONF.anaconda.default_ks_template,
('file://' + CONF.anaconda.default_ks_template,
os.path.join(CONF.deploy.http_root,
self.node.uuid,
'ks.cfg.template')),
@ -1490,7 +1434,7 @@ class PXEInterfacesTestCase(db_base.DbTestCase):
self.assertEqual(expected_info, image_info)
# In the absence of a kickstart template in both instance_info and
# the image, the default kickstart template is used
self.assertEqual(CONF.anaconda.default_ks_template,
self.assertEqual('file://' + CONF.anaconda.default_ks_template,
image_info['ks_template'][0])
calls = [mock.call(task.node), mock.call(task.node)]
boot_opt_mock.assert_has_calls(calls)
@ -1577,6 +1521,46 @@ class PXEInterfacesTestCase(db_base.DbTestCase):
list(fake_pxe_info.values()),
True)
@mock.patch.object(os, 'chmod', autospec=True)
@mock.patch.object(pxe_utils, 'TFTPImageCache', lambda: None)
@mock.patch.object(pxe_utils, 'ensure_tree', autospec=True)
@mock.patch.object(deploy_utils, 'fetch_images', autospec=True)
def test_cache_ramdisk_kernel_ipxe_anaconda(self, mock_fetch_image,
mock_ensure_tree, mock_chmod):
expected_path = os.path.join(CONF.deploy.http_root,
self.node.uuid)
fake_pxe_info = {'ramdisk':
('instance_ramdisk_uuid',
os.path.join(CONF.pxe.tftp_root,
self.node.uuid,
'ramdisk')),
'kernel':
('instance_kernel_uuid',
os.path.join(CONF.pxe.tftp_root,
self.node.uuid,
'kernel')),
'ks_template':
('file://' + CONF.anaconda.default_ks_template,
os.path.join(CONF.deploy.http_root,
self.node.uuid,
'ks.cfg.template')),
'ks_cfg':
('',
os.path.join(CONF.deploy.http_root,
self.node.uuid,
'ks.cfg'))}
expected = fake_pxe_info.copy()
expected.pop('ks_cfg')
with task_manager.acquire(self.context, self.node.uuid,
shared=True) as task:
pxe_utils.cache_ramdisk_kernel(task, fake_pxe_info,
ipxe_enabled=True)
mock_ensure_tree.assert_called_with(expected_path)
mock_fetch_image.assert_called_once_with(self.context, mock.ANY,
list(expected.values()),
True)
@mock.patch.object(pxe.PXEBoot, '__init__', lambda self: None)
class PXEBuildKickstartConfigOptionsTestCase(db_base.DbTestCase):


@ -1124,12 +1124,12 @@ class DoNodeCleanTestCase(db_base.DbTestCase):
class DoNodeCleanAbortTestCase(db_base.DbTestCase):
@mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True)
def _test__do_node_clean_abort(self, step_name, tear_mock):
def _test_do_node_clean_abort(self, clean_step, tear_mock):
node = obj_utils.create_test_node(
self.context, driver='fake-hardware',
provision_state=states.CLEANFAIL,
provision_state=states.CLEANWAIT,
target_provision_state=states.AVAILABLE,
clean_step={'step': 'foo', 'abortable': True},
clean_step=clean_step,
driver_internal_info={
'agent_url': 'some url',
'agent_secret_token': 'token',
@ -1139,11 +1139,11 @@ class DoNodeCleanAbortTestCase(db_base.DbTestCase):
'skip_current_clean_step': True})
with task_manager.acquire(self.context, node.uuid) as task:
cleaning.do_node_clean_abort(task, step_name=step_name)
cleaning.do_node_clean_abort(task)
self.assertIsNotNone(task.node.last_error)
tear_mock.assert_called_once_with(task.driver.deploy, task)
if step_name:
self.assertIn(step_name, task.node.last_error)
if clean_step:
self.assertIn(clean_step['step'], task.node.last_error)
# assert node's clean_step and metadata was cleaned up
self.assertEqual({}, task.node.clean_step)
self.assertNotIn('clean_step_index',
@ -1159,11 +1159,12 @@ class DoNodeCleanAbortTestCase(db_base.DbTestCase):
self.assertNotIn('agent_secret_token',
task.node.driver_internal_info)
def test__do_node_clean_abort(self):
self._test__do_node_clean_abort(None)
def test_do_node_clean_abort_early(self):
self._test_do_node_clean_abort(None)
def test__do_node_clean_abort_with_step_name(self):
self._test__do_node_clean_abort('foo')
def test_do_node_clean_abort_with_step(self):
self._test_do_node_clean_abort({'step': 'foo', 'interface': 'deploy',
'abortable': True})
@mock.patch.object(fake.FakeDeploy, 'tear_down_cleaning', autospec=True)
def test__do_node_clean_abort_tear_down_fail(self, tear_mock):

View file

@ -2730,7 +2730,8 @@ class DoProvisioningActionTestCase(mgr_utils.ServiceSetUpMixin,
# Node will be moved to tgt_prov_state after cleaning, not tested here
self.assertEqual(states.CLEANFAIL, node.provision_state)
self.assertEqual(tgt_prov_state, node.target_provision_state)
self.assertIsNone(node.last_error)
self.assertEqual('By request, the clean operation was aborted',
node.last_error)
mock_spawn.assert_called_with(
self.service, cleaning.do_node_clean_abort, mock.ANY)


@ -196,7 +196,8 @@ class NodePowerActionTestCase(db_base.DbTestCase):
node = obj_utils.create_test_node(self.context,
uuid=uuidutils.generate_uuid(),
driver='fake-hardware',
power_state=states.POWER_OFF)
power_state=states.POWER_OFF,
last_error='failed before')
task = task_manager.TaskManager(self.context, node.uuid)
get_power_mock.return_value = states.POWER_OFF
@ -209,6 +210,27 @@ class NodePowerActionTestCase(db_base.DbTestCase):
self.assertIsNone(node['target_power_state'])
self.assertIsNone(node['last_error'])
@mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)
def test_node_power_action_keep_last_error(self, get_power_mock):
"""Test node_power_action to keep last_error for failed states."""
node = obj_utils.create_test_node(self.context,
uuid=uuidutils.generate_uuid(),
driver='fake-hardware',
power_state=states.POWER_OFF,
provision_state=states.CLEANFAIL,
last_error='failed before')
task = task_manager.TaskManager(self.context, node.uuid)
get_power_mock.return_value = states.POWER_OFF
conductor_utils.node_power_action(task, states.POWER_ON)
node.refresh()
get_power_mock.assert_called_once_with(mock.ANY, mock.ANY)
self.assertEqual(states.POWER_ON, node['power_state'])
self.assertIsNone(node['target_power_state'])
self.assertEqual('failed before', node['last_error'])
@mock.patch('ironic.objects.node.NodeSetPowerStateNotification',
autospec=True)
@mock.patch.object(fake.FakePower, 'get_power_state', autospec=True)

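The behaviour these power-action tests pin down reduces to a small rule; a hedged sketch with the state names spelled out and the helper name invented:

```python
# Provision states in which a power action must NOT wipe last_error:
# the node already carries a failure reason worth keeping visible.
PRESERVE_LAST_ERROR_STATES = {'clean failed', 'deploy failed'}

def last_error_after_power_action(provision_state, last_error):
    if provision_state in PRESERVE_LAST_ERROR_STATES:
        return last_error  # keep e.g. 'failed before' visible
    return None            # ordinary case: success clears the field

assert last_error_after_power_action('clean failed', 'failed before') == 'failed before'
assert last_error_after_power_action('available', 'failed before') is None
```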

@ -202,7 +202,7 @@ class IRMCManagementTestCase(test_common.BaseIRMCTest):
self._test_management_interface_set_boot_device_ok(
None,
params,
"0x00 0x08 0x05 0x80 0x04 0x00 0x00 0x00")
"0x00 0x08 0x05 0xa0 0x04 0x00 0x00 0x00")
self._test_management_interface_set_boot_device_ok(
'bios',
params,
@ -216,7 +216,7 @@ class IRMCManagementTestCase(test_common.BaseIRMCTest):
self._test_management_interface_set_boot_device_ok(
None,
params,
"0x00 0x08 0x05 0xc0 0x04 0x00 0x00 0x00")
"0x00 0x08 0x05 0xe0 0x04 0x00 0x00 0x00")
self._test_management_interface_set_boot_device_ok(
'bios',
params,
@ -231,7 +231,7 @@ class IRMCManagementTestCase(test_common.BaseIRMCTest):
self._test_management_interface_set_boot_device_ok(
None,
params,
"0x00 0x08 0x05 0x80 0x08 0x00 0x00 0x00")
"0x00 0x08 0x05 0xa0 0x08 0x00 0x00 0x00")
self._test_management_interface_set_boot_device_ok(
'bios',
params,
@ -245,7 +245,7 @@ class IRMCManagementTestCase(test_common.BaseIRMCTest):
self._test_management_interface_set_boot_device_ok(
None,
params,
"0x00 0x08 0x05 0xc0 0x08 0x00 0x00 0x00")
"0x00 0x08 0x05 0xe0 0x08 0x00 0x00 0x00")
self._test_management_interface_set_boot_device_ok(
'bios',
params,
@ -260,7 +260,7 @@ class IRMCManagementTestCase(test_common.BaseIRMCTest):
self._test_management_interface_set_boot_device_ok(
None,
params,
"0x00 0x08 0x05 0x80 0x20 0x00 0x00 0x00")
"0x00 0x08 0x05 0xa0 0x20 0x00 0x00 0x00")
self._test_management_interface_set_boot_device_ok(
'bios',
params,
@ -274,7 +274,7 @@ class IRMCManagementTestCase(test_common.BaseIRMCTest):
self._test_management_interface_set_boot_device_ok(
None,
params,
"0x00 0x08 0x05 0xc0 0x20 0x00 0x00 0x00")
"0x00 0x08 0x05 0xe0 0x20 0x00 0x00 0x00")
self._test_management_interface_set_boot_device_ok(
'bios',
params,
@ -289,7 +289,7 @@ class IRMCManagementTestCase(test_common.BaseIRMCTest):
self._test_management_interface_set_boot_device_ok(
None,
params,
"0x00 0x08 0x05 0x80 0x18 0x00 0x00 0x00")
"0x00 0x08 0x05 0xa0 0x18 0x00 0x00 0x00")
self._test_management_interface_set_boot_device_ok(
'bios',
params,
@ -303,7 +303,7 @@ class IRMCManagementTestCase(test_common.BaseIRMCTest):
self._test_management_interface_set_boot_device_ok(
None,
params,
"0x00 0x08 0x05 0xc0 0x18 0x00 0x00 0x00")
"0x00 0x08 0x05 0xe0 0x18 0x00 0x00 0x00")
self._test_management_interface_set_boot_device_ok(
'bios',
params,
@ -318,7 +318,7 @@ class IRMCManagementTestCase(test_common.BaseIRMCTest):
self._test_management_interface_set_boot_device_ok(
None,
params,
"0x00 0x08 0x05 0x80 0x0c 0x00 0x00 0x00")
"0x00 0x08 0x05 0xa0 0x0c 0x00 0x00 0x00")
self._test_management_interface_set_boot_device_ok(
'bios',
params,
@ -332,7 +332,7 @@ class IRMCManagementTestCase(test_common.BaseIRMCTest):
self._test_management_interface_set_boot_device_ok(
None,
params,
"0x00 0x08 0x05 0xc0 0x0c 0x00 0x00 0x00")
"0x00 0x08 0x05 0xe0 0x0c 0x00 0x00 0x00")
self._test_management_interface_set_boot_device_ok(
'bios',
params,


@ -597,7 +597,8 @@ class RedfishBiosRegistryTestCase(db_base.DbTestCase):
self.registry.registry_entries.attributes[1].read_only = False
self.registry.registry_entries.attributes[1].allowable_values =\
[{'ValueName': 'Enabled', 'ValueDisplayName': 'Enabled'},
{'ValueName': 'Disabled', 'ValueDisplayName': 'Disabled'}]
{'ValueDisplayName': 'Disabled'},
{'Invalid': 'banana'}]
self.registry.registry_entries.attributes[2].name = "BootDelay"
self.registry.registry_entries.attributes[2].attribute_type = "Integer"
self.registry.registry_entries.attributes[2].lower_bound = 5


@ -1598,3 +1598,13 @@ class RedfishManagementTestCase(db_base.DbTestCase):
shared=True) as task:
self.assertEqual([],
task.driver.management.get_mac_addresses(task))
@mock.patch.object(redfish_utils, 'get_enabled_macs', autospec=True)
@mock.patch.object(redfish_utils, 'get_system', autospec=True)
def test_get_mac_addresses_missing_attr(self, mock_get_system,
mock_get_enabled_macs):
redfish_utils.get_enabled_macs.side_effect = (sushy.exceptions.
MissingAttributeError)
with task_manager.acquire(self.context, self.node.uuid,
shared=True) as task:
self.assertIsNone(task.driver.management.get_mac_addresses(task))


@ -252,6 +252,7 @@ class RedfishUtilsAuthTestCase(db_base.DbTestCase):
redfish_utils.get_system(self.node)
redfish_utils.get_system(self.node)
self.assertEqual(1, mock_sushy.call_count)
self.assertEqual(len(redfish_utils.SessionCache._sessions), 1)
@mock.patch.object(sushy, 'Sushy', autospec=True)
def test_ensure_new_session_address(self, mock_sushy):
@ -269,6 +270,21 @@ class RedfishUtilsAuthTestCase(db_base.DbTestCase):
redfish_utils.get_system(self.node)
self.assertEqual(2, mock_sushy.call_count)
@mock.patch.object(sushy, 'Sushy', autospec=True)
def test_ensure_new_session_password(self, mock_sushy):
d_info = self.node.driver_info
d_info['redfish_username'] = 'foo'
d_info['redfish_password'] = 'bar'
self.node.driver_info = d_info
self.node.save()
redfish_utils.get_system(self.node)
d_info['redfish_password'] = 'foo'
self.node.driver_info = d_info
self.node.save()
redfish_utils.SessionCache._sessions = collections.OrderedDict()
redfish_utils.get_system(self.node)
self.assertEqual(2, mock_sushy.call_count)
@mock.patch.object(sushy, 'Sushy', autospec=True)
@mock.patch('ironic.drivers.modules.redfish.utils.'
'SessionCache.AUTH_CLASSES', autospec=True)


@ -105,73 +105,96 @@ class RedfishImageHandlerTestCase(db_base.DbTestCase):
mock_swift_api.delete_object.assert_called_once_with(
'ironic_redfish_container', object_name)
@mock.patch.object(utils, 'execute', autospec=True)
@mock.patch.object(os, 'chmod', autospec=True)
@mock.patch.object(image_utils, 'shutil', autospec=True)
@mock.patch.object(os, 'link', autospec=True)
@mock.patch.object(os, 'mkdir', autospec=True)
def test_publish_image_local_link(
self, mock_mkdir, mock_link, mock_shutil, mock_chmod):
self, mock_mkdir, mock_link, mock_shutil, mock_chmod,
mock_execute):
self.config(use_swift=False, group='redfish')
self.config(http_url='http://localhost', group='deploy')
img_handler_obj = image_utils.ImageHandler(self.node.driver)
url = img_handler_obj.publish_image('file.iso', 'boot.iso')
self.assertEqual(
'http://localhost/redfish/boot.iso', url)
mock_mkdir.assert_called_once_with('/httpboot/redfish', 0o755)
mock_link.assert_called_once_with(
'file.iso', '/httpboot/redfish/boot.iso')
mock_chmod.assert_called_once_with('file.iso', 0o644)
mock_execute.assert_called_once_with(
'/usr/sbin/restorecon', '-i', '-R', 'v', '/httpboot/redfish')
@mock.patch.object(utils, 'execute', autospec=True)
@mock.patch.object(os, 'chmod', autospec=True)
@mock.patch.object(image_utils, 'shutil', autospec=True)
@mock.patch.object(os, 'link', autospec=True)
@mock.patch.object(os, 'mkdir', autospec=True)
def test_publish_image_local_link_no_restorecon(
self, mock_mkdir, mock_link, mock_shutil, mock_chmod,
mock_execute):
self.config(use_swift=False, group='redfish')
self.config(http_url='http://localhost', group='deploy')
img_handler_obj = image_utils.ImageHandler(self.node.driver)
mock_execute.side_effect = FileNotFoundError
url = img_handler_obj.publish_image('file.iso', 'boot.iso')
self.assertEqual(
'http://localhost/redfish/boot.iso', url)
mock_mkdir.assert_called_once_with('/httpboot/redfish', 0o755)
mock_link.assert_called_once_with(
'file.iso', '/httpboot/redfish/boot.iso')
mock_chmod.assert_called_once_with('file.iso', 0o644)
mock_shutil.assert_not_called()
@mock.patch.object(utils, 'execute', autospec=True)
@mock.patch.object(os, 'chmod', autospec=True)
@mock.patch.object(image_utils, 'shutil', autospec=True)
@mock.patch.object(os, 'link', autospec=True)
@mock.patch.object(os, 'mkdir', autospec=True)
def test_publish_image_external_ip(
self, mock_mkdir, mock_link, mock_shutil, mock_chmod):
self, mock_mkdir, mock_link, mock_shutil, mock_chmod,
mock_execute):
self.config(use_swift=False, group='redfish')
self.config(http_url='http://localhost',
external_http_url='http://non-local.host',
group='deploy')
img_handler_obj = image_utils.ImageHandler(self.node.driver)
url = img_handler_obj.publish_image('file.iso', 'boot.iso')
self.assertEqual(
'http://non-local.host/redfish/boot.iso', url)
mock_mkdir.assert_called_once_with('/httpboot/redfish', 0o755)
mock_link.assert_called_once_with(
'file.iso', '/httpboot/redfish/boot.iso')
mock_chmod.assert_called_once_with('file.iso', 0o644)
mock_execute.assert_called_once_with(
'/usr/sbin/restorecon', '-i', '-R', 'v', '/httpboot/redfish')
@mock.patch.object(utils, 'execute', autospec=True)
@mock.patch.object(os, 'chmod', autospec=True)
@mock.patch.object(image_utils, 'shutil', autospec=True)
@mock.patch.object(os, 'link', autospec=True)
@mock.patch.object(os, 'mkdir', autospec=True)
def test_publish_image_external_ip_node_override(
self, mock_mkdir, mock_link, mock_shutil, mock_chmod):
self, mock_mkdir, mock_link, mock_shutil, mock_chmod,
mock_execute):
self.config(use_swift=False, group='redfish')
self.config(http_url='http://localhost',
external_http_url='http://non-local.host',
group='deploy')
img_handler_obj = image_utils.ImageHandler(self.node.driver)
self.node.driver_info["external_http_url"] = "http://node.override.url"
override_url = self.node.driver_info.get("external_http_url")
url = img_handler_obj.publish_image('file.iso', 'boot.iso',
override_url)
self.assertEqual(
'http://node.override.url/redfish/boot.iso', url)
mock_mkdir.assert_called_once_with('/httpboot/redfish', 0o755)
mock_link.assert_called_once_with(
'file.iso', '/httpboot/redfish/boot.iso')
mock_chmod.assert_called_once_with('file.iso', 0o644)
mock_execute.assert_called_once_with(
'/usr/sbin/restorecon', '-i', '-R', 'v', '/httpboot/redfish')
@mock.patch.object(os, 'chmod', autospec=True)
@mock.patch.object(image_utils, 'shutil', autospec=True)

releasenotes/config.yaml Normal file

@ -0,0 +1,5 @@
---
# Ignore the kilo-eol tag because that branch does not work with reno
# and contains no release notes.
# Ignore bugfix tags because their releasenotes are covered under stable
closed_branch_tag_re: 'r"(?!^(kilo-|bugfix-)).+-eol$"'


@ -0,0 +1,5 @@
---
fixes:
- |
Fixes detection of allowable values for a BIOS setting enumeration in
the ``redfish`` BIOS interface when only ``ValueDisplayName`` is provided.


@ -0,0 +1,8 @@
---
fixes:
- |
When aborting cleaning, the ``last_error`` field is no longer initially
empty. It is now populated on the state transition to ``clean failed``.
- |
When cleaning or deployment fails, the ``last_error`` field is no longer
temporarily set to ``None`` while the power off action is running.


@ -0,0 +1,16 @@
---
fixes:
- |
Fixes an issue where image information retrieval would fail when a
path was supplied while using the ``anaconda`` deploy interface,
as ``HTTP`` ``HEAD`` requests on a URL path have no ``Content-Length``.
We now consider if a path is used prior to attempting to collect
additional configuration data from what is normally expected to
be Glance.
- |
Fixes an issue where the fallback to a default kickstart template
value would result in an error indicating
"Scheme-less image href is not a UUID".
This was because the handling code falling back to the default
did not explicitly indicate it was a file URL before saving the
value.


@ -0,0 +1,7 @@
---
fixes:
- |
Fixes an issue where, if SELinux is enabled and enforcing and the
published image is a hardlink, the source SELinux context is
preserved, causing access to be denied when retrieving the image
via the hardlink URL.


@ -0,0 +1,9 @@
---
fixes:
- |
Fixes the bug where provisioning a Redfish-managed node fails if the BMC
does not support the ``EthernetInterfaces`` attribute, even if MAC address
information is provided manually. This is done by handling the
``MissingAttributeError`` sushy exception in the ``get_mac_addresses()``
method. This fix is needed to successfully provision machines such as
Cisco UCSB and UCSX blades.


@ -0,0 +1,5 @@
---
upgrade:
- |
Adds ``sha256``, ``sha384`` and ``sha512`` as supported SNMPv3
authentication protocols to the iRMC driver.


@ -0,0 +1,12 @@
---
fixes:
- |
Modifies the iRMC driver to use the ironic.conf ``[deploy]
default_boot_mode`` option to determine the default boot mode.
upgrade:
- Existing iRMC nodes without an explicitly set ``capabilities`` ``boot_mode``
will change from boot mode ``bios`` to the value of ``[deploy]
default_boot_mode`` (which defaults to ``uefi`` since release 18.2.0).
Explicitly setting ``capabilities`` ``boot_mode:bios`` on existing nodes
without any ``boot_mode`` set is recommended.


@ -0,0 +1,5 @@
---
fixes:
- |
Fixes API error messages with jsonschema>=4.8. A possible root cause is
now detected for generic schema errors.


@ -0,0 +1,7 @@
---
fixes:
- |
Fixes an issue where the Redfish session cache would continue using an
old session when a password for a Redfish BMC was changed. Now the old
session will not be found in this case, and a new session will be created
with the latest credential information available.


@ -1,4 +0,0 @@
---
# Ignore the kilo-eol tag because that branch does not work with reno
# and contains no release notes.
closed_branch_tag_re: "(.+)(?<!kilo)-eol"

tox.ini

@ -1,14 +1,15 @@
[tox]
minversion = 3.18.0
skipsdist = True
envlist = py3,pep8
ignore_basepython_conflict=true
requires =
tox<4
[testenv]
usedevelop = True
basepython = python3
setenv = VIRTUAL_ENV={envdir}
PYTHONDONTWRITEBYTECODE = 1
PYTHONDONTWRITEBYTECODE=1
LANGUAGE=en_US
LC_ALL=en_US.UTF-8
PYTHONWARNINGS=default::DeprecationWarning
@ -18,7 +19,12 @@ deps =
-r{toxinidir}/test-requirements.txt
commands =
stestr run --slowest {posargs}
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
passenv = http_proxy
HTTP_PROXY
https_proxy
HTTPS_PROXY
no_proxy
NO_PROXY
[testenv:unit-with-driver-libs]
deps = {[testenv]deps}


@ -10,10 +10,19 @@
# TODO(TheJulia): Explicitly pull in DIB until we get a release cut.
- opendev.org/openstack/diskimage-builder
- opendev.org/openstack/ironic
- opendev.org/openstack/ironic-python-agent
- name: opendev.org/openstack/ironic-python-agent
override-checkout: bugfix/9.0
- opendev.org/openstack/ironic-python-agent-builder
- opendev.org/openstack/ironic-tempest-plugin
- opendev.org/openstack/virtualbmc
- name: openstack/neutron
override-checkout: stable/zed
- name: openstack/nova
override-checkout: stable/zed
- name: openstack/swift
override-checkout: stable/zed
- name: openstack/requirements
override-checkout: stable/zed
irrelevant-files:
- ^.*\.rst$
- ^api-ref/.*$
@ -1033,8 +1042,12 @@
description: Ironic unit tests run with Sushy from source
parent: openstack-tox
required-projects:
- opendev.org/openstack/ironic
- opendev.org/openstack/sushy
- name: opendev.org/openstack/ironic
override-checkout: bugfix/21.0
- name: opendev.org/openstack/sushy
override-checkout: stable/zed
- name: openstack/requirements
override-checkout: stable/zed
irrelevant-files:
- ^.*\.rst$
- ^api-ref/.*$
@ -1051,3 +1064,78 @@
# NOTE(dtantsur): this job will be run on sushy as well, so it's
# important to set the working dir to the Ironic checkout.
zuul_work_dir: "{{ ansible_user_dir }}/{{ zuul.projects['opendev.org/openstack/ironic'].src_dir }}"
- project-template:
name: openstack-python3-zed-jobs-ironic-bugfix210
description: |
Runs unit tests for an OpenStack Python project under the CPython
version 3 releases designated for testing in the Zed release.
check:
jobs:
- openstack-tox-pep8:
required-projects:
- name: openstack/requirements
override-checkout: stable/zed
- openstack-tox-py38:
required-projects:
- name: openstack/requirements
override-checkout: stable/zed
- openstack-tox-py39:
required-projects:
- name: openstack/requirements
override-checkout: stable/zed
- openstack-tox-py310:
voting: false
required-projects:
- name: openstack/requirements
override-checkout: stable/zed
gate:
jobs:
- openstack-tox-pep8:
required-projects:
- name: openstack/requirements
override-checkout: stable/zed
- openstack-tox-py38:
required-projects:
- name: openstack/requirements
override-checkout: stable/zed
- openstack-tox-py39:
required-projects:
- name: openstack/requirements
override-checkout: stable/zed
- project-template:
name: openstack-python3-zed-jobs-ironic-bugfix210-arm64
description: |
Runs unit tests for an OpenStack Python project under the CPython
version 3 releases designated for testing in the Zed release.
check:
jobs:
- openstack-tox-py38-arm64:
voting: false
required-projects:
- name: openstack/requirements
override-checkout: stable/zed
- openstack-tox-py39-arm64:
voting: false
required-projects:
- name: openstack/requirements
override-checkout: stable/zed
- project-template:
name: openstack-cover-jobs-ironic-bugfix210
description: |
Runs openstack-tox-cover in only the check pipeline using
zed upper-constraints.
check:
jobs:
- openstack-tox-cover:
required-projects:
- name: openstack/requirements
override-checkout: stable/zed
gate:
jobs:
- openstack-tox-cover:
required-projects:
- name: openstack/requirements
override-checkout: stable/zed


@ -1,12 +1,11 @@
- project:
templates:
- check-requirements
- openstack-cover-jobs
- openstack-python3-zed-jobs
- openstack-python3-zed-jobs-arm64
- openstack-cover-jobs-ironic-bugfix210
- openstack-python3-zed-jobs-ironic-bugfix210
- openstack-python3-zed-jobs-ironic-bugfix210-arm64
- periodic-stable-jobs
- publish-openstack-docs-pti
- release-notes-jobs-python3
check:
jobs:
- ironic-tox-unit-with-driver-libs