Compare commits

..

13 commits

Author SHA1 Message Date
Zuul
fc2549a223 Merge "Calculate missing checksum for file:// based images" into bugfix/24.0 2025-01-10 10:46:31 +00:00
Riccardo Pittau
8ba74445b9 [bugfix only] Pin upper-constraints
This fixes the tox tests.
The bugfix branches should pin upper constraints when they are
created.

Change-Id: I175a8394c5cad3fb3f99534dc2cf15eb92654aa4
2025-01-08 16:37:48 +01:00
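
A sketch of the pinning described above (the constraints URL here is
illustrative, not taken from this change): a bugfix branch points tox at a
frozen upper-constraints file instead of tracking master.

    [testenv]
    deps =
        -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/2024.1}
        -r{toxinidir}/test-requirements.txt
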
Steve Baker
a39a8ee134 Calculate missing checksum for file:// based images
The fix for CVE-2024-47211 results in an image checksum being required in
all cases. However, there is no requirement for checksums on
file:// based images.

This change checks for this situation. When the checksum is missing for a
file:// based image_source, it is now calculated on the fly.

Change-Id: Ib2fd5ddcbee9a9d1c7e32770ec3d9b6cb20a2e2a
(cherry picked from commit b827c7bf72)
2025-01-08 10:19:00 +13:00
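
A minimal sketch of the behavior described above, assuming a helper along
these lines (names are illustrative, not the actual Ironic code): when a
file:// image_source arrives without a checksum, one is computed from the
local file so the validation required since CVE-2024-47211 still has a
value to verify.

    import hashlib
    from urllib.parse import urlparse

    def checksum_for_file_url(image_source, algorithm='sha256'):
        # Compute a checksum on the fly for a file:// based image_source.
        path = urlparse(image_source).path
        digest = hashlib.new(algorithm)
        with open(path, 'rb') as f:
            # Stream in 1 MiB chunks so large images are not read into
            # memory at once.
            for chunk in iter(lambda: f.read(1024 * 1024), b''):
                digest.update(chunk)
        return digest.hexdigest()
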
Julia Kreger
3aa45b7a9d Fix actual size calculation for storage fallback logic
When we were fixing the qemu-img related CVE, in our rush we didn't
realize that the logic for storage sizing, which only falls back to the
actual size, didn't match the prior interface exactly: instead of
disk_size, we have actual_size on the format inspector.

This was not discovered because all of the code handling that side
of the unit tests was mocked.

Anyhow, an easy fix.

Closes-Bug: 2083520
Change-Id: Ic4390d578f564f245d7fb4013f2ba5531aee9ea9
(cherry picked from commit 90f9fa3eb0)
2024-10-09 07:09:23 +00:00
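
Illustrative only (not the patched code itself): the essence of the fix is
reading the attribute the format inspector actually exposes when falling
back from the advertised image size.

    def required_disk_space(image_size, inspector):
        # The format inspector exposes actual_size, not disk_size; fall
        # back to it only when no size was advertised with the image.
        return image_size or inspector.actual_size
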
Julia Kreger
c164641b45 Checksum files before raw conversion
While working another issue, we discovered that the support added to
the ironic-conductor process, combining the image_download_source
option of "local" with the "force_raw" option, resulted in a case
where Ironic had no way to checksum the files *before* the
conductor process triggered an image format conversion and
then recorded new checksum values.

In essence, this left the user-requested image file
susceptible to a theoretical man-in-the-middle attack OR
to the remote server replacing the content with an unknown file,
such as a new major version.

This is at odds with Ironic's security model, where we do want to
ensure the end user of Ironic is asserting a known checksum for
the image artifact they are deploying, so they are aware of its
present state. Due to the risk, we chose to raise this as a CVE,
as infrastructure operators should likely apply this patch.

As a note, if you're *not* forcing all images to raw format
through the conductor, then this issue is likely not a major
one for you, but you should still apply the patch.

This is being tracked as CVE-2024-47211.

Closes-Bug: 2076289
Change-Id: Id6185b317aa6e4f4363ee49f77e688701995323a
Signed-off-by: Julia Kreger <juliaashleykreger@gmail.com>
2024-09-25 14:57:26 -07:00
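
A hedged sketch of the ordering this fix enforces (helper name and flow are
illustrative): the user-asserted checksum is verified against the fetched
artifact *before* qemu-img converts it, and only then is a new checksum
recorded for the raw result.

    import hashlib
    import subprocess

    def verify_then_convert(src_path, raw_path, expected_hexdigest,
                            algorithm='sha256'):
        digest = hashlib.new(algorithm)
        with open(src_path, 'rb') as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b''):
                digest.update(chunk)
        # Verify *before* any conversion touches the file.
        if digest.hexdigest() != expected_hexdigest:
            raise ValueError('image checksum mismatch before conversion')
        subprocess.check_call(
            ['qemu-img', 'convert', '-O', 'raw', src_path, raw_path])
        # A fresh checksum of raw_path can now be computed and recorded.
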
Dmitry Tantsur
091a0e8512
Fix inspection if bmc_address or bmc_v6address is None
IPA started sending None when the device is not found.

Change-Id: Ibeef33ff9a0acdb7c605bc46ef9e5d203c7aaa6d
(cherry picked from commit ad03a4c32d)
2024-09-12 17:08:38 +02:00
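
A minimal illustration of the guard (function name hypothetical): a None
address reported by IPA is now treated the same as a missing one.

    def usable_bmc_addresses(inventory):
        # IPA may send None when the device is not found; filter it out
        # along with empty strings before using the addresses.
        candidates = (inventory.get('bmc_address'),
                      inventory.get('bmc_v6address'))
        return [addr for addr in candidates if addr]
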
Dmitry Tantsur
d6dbf988f6
Try limiting MTU to at least 1280
Change-Id: If8f9907df62019b3cf6d6df7d83d5ff421f6be65
(cherry picked from commit 510f87a033)
2024-09-12 17:05:38 +02:00
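
A sketch of the clamping the commit title describes, assuming the floor of
1280 is chosen because it is the IPv6 minimum link MTU (helper name
illustrative).

    def effective_mtu(discovered_mtu, floor=1280):
        # Never use an MTU below the IPv6 minimum of 1280.
        if not discovered_mtu:
            return floor
        return max(discovered_mtu, floor)
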
Zuul
3a94de6bea Merge "CVE-2024-44982: Harden all image handling and conversion code" into bugfix/24.0 2024-09-05 04:09:24 +00:00
Julia Kreger
07bb2caf3c CVE-2024-44982: Harden all image handling and conversion code
It was recently learned by the OpenStack community that running qemu-img
on untrusted images without a format pre-specified can present a
security risk. Furthermore, some of these specific image formats have
inherently unsafe features. This is rooted in how qemu-img operates:
all image drivers are loaded and attempt to evaluate the input data.
This can result in several different attack vectors, which this patch
works to close.

This change imports the qemu-img handling code from Ironic-Lib into
Ironic, along with image format inspection code that has been developed
by the wider community to validate the general safety of images before
converting them for use in a deployment.

This patch contains functional changes related to the hardening of these
calls, including how images are handled, and updates the documentation to
provide context and guidance to operators.

Closes-Bug: 2071740
Change-Id: I7fac5c64f89aec39e9755f0930ee47ff8f7aed47
Signed-off-by: Julia Kreger <juliaashleykreger@gmail.com>
2024-09-04 15:19:49 -07:00
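
A hedged sketch of the hardening pattern the message describes (names and
the allow-list are illustrative): the image is inspected first, and
qemu-img is then invoked with the detected format passed explicitly, so it
never probes untrusted data with every driver loaded.

    import subprocess

    SAFE_FORMATS = {'raw', 'qcow2'}

    def convert_inspected_image(src_path, dest_path, detected_format):
        # Refuse anything the inspector did not positively identify as a
        # format we are prepared to handle.
        if detected_format not in SAFE_FORMATS:
            raise ValueError('unsafe or unknown image format: %s'
                             % detected_format)
        # '-f' pins the input format so qemu-img will not guess it.
        subprocess.check_call(
            ['qemu-img', 'convert', '-f', detected_format, '-O', 'raw',
             src_path, dest_path])
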
Zuul
78071be02a Merge "CI: Disable metal3-integration test job" into bugfix/24.0 2024-09-04 18:33:18 +00:00
Julia Kreger
ca4f4bf86e CI: Disable metal3-integration test job
The metal3-integration CI job is not smart enough to know which
branches to pull in order to correctly test this branch, so it
should be disabled here.

Change-Id: If04a5b97722cc1a8e125c3348e09339c3a7ce0eb
(cherry picked from commit 4cb0af7fd6)
2024-09-04 10:56:31 -07:00
Riccardo Pittau
a31a49eb07 [bugfix only] Remove deleted lextudio packages
The lextudio pyasn1 and pyasn1-modules packages have been yanked.
Just use the normal ones.

Change-Id: Ia63be2f04e2cd0438a4a14ac7b4a7cdb63bd8093
2024-08-12 09:19:34 +02:00
OpenStack Release Bot
3464aef661 Update .gitreview for bugfix/24.0
Change-Id: I36e0dd100cd5605a5a65fc599a5fa97a25f06af8
2024-02-01 11:20:50 +00:00
1120 changed files with 28964 additions and 77171 deletions


@@ -1,7 +0,0 @@
[run]
branch = True
source = ironic
omit = ironic/tests/*
[report]
ignore_errors = True

.gitignore (7 changes)

@@ -8,16 +8,10 @@
_build
doc/source/contributor/api/
_static
doc/source/admin/drivers/redfish/OpenStackIronicProfile.*.rst
# release notes build
releasenotes/build
# sample config files
etc/ironic/ironic.conf.sample
etc/ironic/ironic.networking.conf.sample
etc/ironic/policy.yaml.sample
# Packages/installer info
*.egg
*.egg-info
@@ -34,7 +28,6 @@ develop-eggs
# Other
*.DS_Store
.idea
.vscode
.testrepository
.stestr
.tox


@@ -2,3 +2,4 @@
host=review.opendev.org
port=29418
project=openstack/ironic.git
defaultbranch=bugfix/24.0


@@ -1,104 +0,0 @@
---
default_language_version:
# force all unspecified python hooks to run python3
python: python3
exclude: |
(?x)^(
venv/|
.venv/|
env/|
.tox/|
build/|
dist/
)
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v6.0.0
hooks:
- id: trailing-whitespace
# NOTE(JayF): We shouldn't modify release notes after their
# associated release. Instead, ignore these minor lint issues.
exclude: |
(?x)(
^releasenotes/notes/redfish-raid-get-drives-fix-18d46f3e7275b0ef.yaml$|
^releasenotes/notes/provide_mountpoint-58cfd25b6dd4cfde.yaml$|
^releasenotes/notes/ipmi-retries-min-command-interval-070cd7eff5eb74dd.yaml$|
^releasenotes/notes/deprecate-ibmc-9106cc3a81171738.yaml$|
^releasenotes/notes/fix-cve-2016-4985-b62abae577025365.yaml$
)
- id: mixed-line-ending
args: ['--fix', 'lf']
exclude: |
(?x)(
.*.svg$|
^releasenotes/notes/ibmc-driver-45fcf9f50ebf0193.yaml$|
)
- id: fix-byte-order-marker
- id: check-merge-conflict
- id: debug-statements
- id: check-json
files: .*\.json$
- id: check-yaml
files: .*\.(yaml|yml)$
exclude: releasenotes/.*$
- repo: https://github.com/Lucas-C/pre-commit-hooks
rev: v1.5.5
hooks:
- id: remove-tabs
exclude: '.*\.(svg)$'
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.6
hooks:
- id: ruff-check
args: ['--fix', '--unsafe-fixes']
- repo: https://opendev.org/openstack/hacking
rev: 8.0.0
hooks:
- id: hacking
additional_dependencies: []
exclude: '^(doc|releasenotes|tools)/.*$'
- repo: https://github.com/codespell-project/codespell
rev: v2.2.6
hooks:
- id: codespell
args: [--write-changes]
- repo: https://github.com/sphinx-contrib/sphinx-lint
rev: v1.0.2
hooks:
- id: sphinx-lint
args: [--enable=default-role]
files: ^doc/|releasenotes|api-ref
- repo: https://opendev.org/openstack/bashate
rev: 2.1.1
hooks:
- id: bashate
args: ["-iE006,E044", "-eE005,E042"]
name: bashate
description: This hook runs bashate for linting shell scripts
entry: bashate
language: python
types: [shell]
- repo: https://github.com/PyCQA/bandit
rev: 1.9.1
hooks:
- id: bandit
args: ["-x", "tests/", "-n5", "-ll", "-c", "tools/bandit.yml"]
name: bandit
description: 'Bandit is a tool for finding common security issues in Python code'
entry: bandit
language: python
language_version: python3
types: [ python ]
require_serial: true
- repo: https://github.com/PyCQA/doc8
rev: v2.0.0
hooks:
- id: doc8
- repo: local
hooks:
- id: check-releasenotes
name: check-releasenotes
language: python
entry: python tools/check-releasenotes.py


@@ -2,27 +2,21 @@
Ironic
======
Team and repository tags
------------------------
.. image:: https://governance.openstack.org/tc/badges/ironic.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
Overview
--------
Ironic consists of an API and plug-ins for managing and provisioning
physical machines in a security-aware and fault-tolerant manner. It can be
used with nova as a hypervisor driver, or standalone service.
By default, it will use PXE and IPMI/Redfish to interact with bare metal
machines. Some drivers, like the Redfish drivers, also support advanced
features like leveraging HTTPBoot or Virtual Media based boot operations
depending on the configuration by the user. Ironic also supports
vendor-specific plug-ins which may implement additional functionality,
however many vendors have chosen to focus on their Redfish implementations
instead of customized drivers.
Numerous ways exist to leverage Ironic to deploy a bare metal node, above
and beyond asking Nova for a "bare metal" instance, or for asking Ironic
to manually deploy a specific machine. Bifrost and Metal3 are related
projects which seek to simplify the use and interaction of Ironic.
used with nova as a hypervisor driver, or standalone service using bifrost.
By default, it will use PXE and IPMI to interact with bare metal machines.
Ironic also supports vendor-specific plug-ins which may implement additional
functionality.
Ironic is distributed under the terms of the Apache License, Version 2.0. The
full terms and conditions of this license are detailed in the LICENSE file.
@@ -39,8 +33,8 @@ Project resources
* Design Specifications: https://specs.openstack.org/openstack/ironic-specs/
Project status, bugs, and requests for feature enhancements (RFEs) are tracked
in Launchpad:
https://launchpad.net/ironic
in StoryBoard:
https://storyboard.openstack.org/#!/project/943
For information on how to contribute to ironic, see
https://docs.openstack.org/ironic/latest/contributor


@@ -273,7 +273,7 @@ GET v1/lookup?node_uuid=$NID > lookup-node-response.json
# and the node's driver is "fake", to avoid potential races
# with internal processes that lock the Node
# this corrects an intentional omission in some of the samples
# this corrects an intentional ommission in some of the samples
PATCH v1/nodes/$NID node-update-driver-info-request.json > node-update-driver-info-response.json
GET v1/nodes/$NID/management/boot_device/supported > node-get-supported-boot-devices-response.json
@@ -359,9 +359,3 @@ sed -i "s/$(hostname)/$DOC_IRONIC_CONDUCTOR_HOSTNAME/" *.json
sed -i "s/created_at\": \".*\"/created_at\": \"$DOC_CREATED_AT\"/" *.json
sed -i "s/updated_at\": \".*\"/updated_at\": \"$DOC_UPDATED_AT\"/" *.json
sed -i "s/provision_updated_at\": \".*\"/provision_updated_at\": \"$DOC_PROVISION_UPDATED_AT\"/" *.json
##########
# Clean up
openstack baremetal node maintenance set $NID
openstack baremetal node delete $NID


@@ -52,7 +52,7 @@ parameters must be missing or match the provided node.
.. versionadded:: 1.79
A node with the same name as the allocation ``name`` is moved to the
start of the derived candidate list.
start of the derived candidiate list.
Normal response codes: 201


@@ -68,7 +68,7 @@ and method.
This endpoint passes the request directly to the hardware driver. The
HTTP BODY must be parseable JSON, which will be converted to parameters passed
to that function. Unparsable JSON, missing parameters, or excess parameters
to that function. Unparseable JSON, missing parameters, or excess parameters
will cause the request to be rejected with an HTTP 400 error.
Normal response code: 200 202


@@ -1,21 +0,0 @@
.. -*- rst -*-
=========================
Get Virtual Media (nodes)
=========================
.. versionadded:: 1.93
Get a list of virtual media devices attached to a node using
the ``v1/nodes/{node_ident}/vmedia`` endpoint.
Get virtual media devices attached to a node
============================================
.. rest_method:: GET /v1/nodes/{node_ident}/vmedia
Get virtual media devices attached to a node.
Normal response code: 200
Error codes: 400,401,403,404,409


@@ -1,260 +0,0 @@
.. -*- rst -*-
===================================
Inspection rules (inspection_rules)
===================================
Inspection Rules consist of conditions that evaluate against inspection data
and actions that run on a node when conditions are met during inspection.
.. versionadded:: 1.96
Inspection Rules API was introduced.
Create Inspection Rule
======================
.. rest_method:: POST /v1/inspection_rules
Creates an inspection rule.
.. versionadded:: 1.96
Inspection Rules API was introduced.
Normal response codes: 201
Error response codes: 400, 401, 403, 409
Request
-------
.. rest_parameters:: parameters.yaml
- uuid: req_uuid
- description: inspection_rule_description
- conditions: inspection_rule_conditions
- actions: inspection_rule_actions
- phase: inspection_rule_phase
- priority: inspection_rule_priority
- sensitive: inspection_rule_sensitive
Request Inspection Rule Condition
---------------------------------
.. rest_parameters:: parameters.yaml
- op: inspection_rule_condition_op
- args: inspection_rule_condition_args
- loop: inspection_rule_condition_loop
- multiple: inspection_rule_condition_multiple
Request Inspection Rule Action
------------------------------
.. rest_parameters:: parameters.yaml
- op: inspection_rule_action_op
- args: inspection_rule_action_args
- loop: inspection_rule_action_loop
Request Example
---------------
.. literalinclude:: samples/inspection-rule-create-request.json
:language: javascript
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- uuid: uuid
- description: inspection_rule_description
- conditions: inspection_rule_conditions
- actions: inspection_rule_actions
- phase: inspection_rule_phase
- priority: inspection_rule_priority
- sensitive: inspection_rule_sensitive
- created_at: created_at
- updated_at: updated_at
- links: links
Response Example
----------------
.. literalinclude:: samples/inspection-rule-create-response.json
:language: javascript
List Inspection Rules
=====================
.. rest_method:: GET /v1/inspection_rules
Lists all inspection rules.
.. versionadded:: 1.96
Inspection Rules API was introduced.
Normal response codes: 200
Error response codes: 400, 401, 403, 404
Request
-------
.. rest_parameters:: parameters.yaml
- detail: detail
- phase: req_inspection_rule_phase
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- uuid: uuid
- description: inspection_rule_description
- phase: inspection_rule_phase
- priority: inspection_rule_priority
- sensitive: inspection_rule_sensitive
- created_at: created_at
- updated_at: updated_at
- links: links
- conditions: inspection_rule_conditions
- actions: inspection_rule_actions
Response Example
----------------
**Example inspection rule list response:**
.. literalinclude:: samples/inspection-rule-list-response.json
:language: javascript
**Example detailed inspection rule list response:**
.. literalinclude:: samples/inspection-rule-detail-response.json
:language: javascript
Show Inspection Rule Details
============================
.. rest_method:: GET /v1/inspection_rules/{rule_id}
Shows details for an inspection rule.
.. versionadded:: 1.96
Inspection Rules API was introduced.
Normal response codes: 200
Error response codes: 400, 401, 403, 404
Request
-------
.. rest_parameters:: parameters.yaml
- rule_id: inspection_rule_ident
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- uuid: uuid
- description: inspection_rule_description
- conditions: inspection_rule_conditions
- actions: inspection_rule_actions
- phase: inspection_rule_phase
- priority: inspection_rule_priority
- sensitive: inspection_rule_sensitive
- created_at: created_at
- updated_at: updated_at
- links: links
Response Example
----------------
.. literalinclude:: samples/inspection-rule-show-response.json
:language: javascript
Update an Inspection Rule
=========================
.. rest_method:: PATCH /v1/inspection_rules/{rule_id}
Update an inspection rule.
.. versionadded:: 1.96
Inspection Rules API was introduced.
Normal response code: 200
Error response codes: 400, 401, 403, 404, 409
Request
-------
The BODY of the PATCH request must be a JSON PATCH document, adhering to
`RFC 6902 <https://tools.ietf.org/html/rfc6902>`_.
.. rest_parameters:: parameters.yaml
- rule_id: inspection_rule_ident
.. literalinclude:: samples/inspection-rule-update-request.json
:language: javascript
Response
--------
.. rest_parameters:: parameters.yaml
- uuid: uuid
- description: inspection_rule_description
- conditions: inspection_rule_conditions
- actions: inspection_rule_actions
- phase: inspection_rule_phase
- priority: inspection_rule_priority
- sensitive: inspection_rule_sensitive
- created_at: created_at
- updated_at: updated_at
- links: links
.. literalinclude:: samples/inspection-rule-update-response.json
:language: javascript
Delete Inspection Rule
======================
.. rest_method:: DELETE /v1/inspection_rules/{rule_id}
Deletes an inspection rule.
.. versionadded:: 1.96
Inspection Rules API was introduced.
Normal response codes: 204
Error response codes: 400, 401, 403, 404
Request
-------
.. rest_parameters:: parameters.yaml
- rule_id: inspection_rule_ident
Delete All Inspection Rules
===========================
.. rest_method:: DELETE /v1/inspection_rules
Deletes all non-built-in inspection rules.
.. versionadded:: 1.96
Inspection Rules API was introduced.
Normal response codes: 204
Error response codes: 400, 401, 403


@@ -35,7 +35,7 @@ depending on the service configuration.
Validate Node
=============
===============
.. rest_method:: GET /v1/nodes/{node_ident}/validate
@@ -85,7 +85,7 @@ the Node's driver does not support that interface.
Set Maintenance Flag
====================
=============================
.. rest_method:: PUT /v1/nodes/{node_ident}/maintenance
@@ -110,7 +110,7 @@ Request
.. literalinclude:: samples/node-maintenance-request.json
Clear Maintenance Flag
======================
==============================
.. rest_method:: DELETE /v1/nodes/{node_ident}/maintenance
@@ -198,7 +198,7 @@ Response
Get Supported Boot Devices
==========================
===========================
.. rest_method:: GET /v1/nodes/{node_ident}/management/boot_device/supported
@@ -450,19 +450,6 @@ detailed documentation of the Ironic State Machine is available
``disable_ramdisk`` can be provided to avoid booting the ramdisk during
manual cleaning.
.. versionadded:: 1.87
A node can be serviced by setting the provision target state to ``service``
with a list of ``service_steps``.
.. versionadded:: 1.92
Added the ability to allow for predefined sets of steps to be executed
during provisioning by passing in a ``runbook_ident`` that's already
approved for the given node, as an alternative to providing ``clean_steps``
or ``service_steps`` dictionary.
.. versionadded:: 1.95
Added the ability to set/unset ``disable_power_off`` on a node.
Normal response code: 202
Error codes:
@@ -481,10 +468,8 @@ Request
- configdrive: configdrive
- clean_steps: clean_steps
- deploy_steps: deploy_steps
- service_steps: service_steps
- rescue_password: rescue_password
- disable_ramdisk: disable_ramdisk
- runbook: runbook_ident
**Example request to deploy a Node, using a configdrive served via local webserver:**
@@ -498,17 +483,6 @@ Request
.. literalinclude:: samples/node-set-clean-state.json
**Example request to service a Node, with custom service step:**
.. literalinclude:: samples/node-set-service-state.json
**Example request to set provision state for a Node with a runbook:**
.. literalinclude:: samples/node-set-provision-state.json
.. note:: Use ``runbook`` as an alternative to ``clean_steps`` or
``service_steps``. If ``runbook`` is provided, ``clean_steps`` or
``service_steps`` must not be included in the request.
Set RAID Config
===============


@@ -61,7 +61,7 @@ and method.
This endpoint passes the request directly to the Node's hardware driver. The
HTTP BODY must be parseable JSON, which will be converted to parameters passed
to that function. Unparsable JSON, missing parameters, or excess parameters
to that function. Unparseable JSON, missing parameters, or excess parameters
will cause the request to be rejected with an HTTP 400 error.
Normal response code: 200 202


@@ -12,7 +12,7 @@ by accessing the Port resources under the ``/v1/ports`` endpoint.
List Ports by Node
==================
===================
.. rest_method:: GET /v1/nodes/{node_ident}/ports
@@ -35,18 +35,6 @@ Return a list of bare metal Ports associated with ``node_ident``.
.. versionadded:: 1.53
Added the ``is_smartnic`` response fields.
.. versionadded:: 1.88
Added the ``name`` field.
.. versionadded:: 1.97
Added the ``description`` field.
.. versionadded:: 1.100
Added the ``vendor`` field.
.. versionadded:: 1.101
Added the ``category`` field.
Normal response code: 200
Error codes: TBD
@@ -97,18 +85,6 @@ Return a detailed list of bare metal Ports associated with ``node_ident``.
.. versionadded:: 1.53
Added the ``is_smartnic`` response fields.
.. versionadded:: 1.88
Added the ``name`` field.
.. versionadded:: 1.97
Added the ``description`` field.
.. versionadded:: 1.100
Added the ``vendor`` field.
.. versionadded:: 1.101
Added the ``category`` field.
Normal response code: 200
Error codes: TBD
@@ -134,7 +110,6 @@ Response
- uuid: uuid
- address: port_address
- node_uuid: node_uuid
- name: port_name
- local_link_connection: local_link_connection
- pxe_enabled: pxe_enabled
- physical_network: physical_network
@@ -144,9 +119,6 @@
- updated_at: updated_at
- links: links
- is_smartnic: is_smartnic
- description: port_description
- vendor: port_vendor
- category: port_category
**Example details of a Node's Ports:**


@@ -18,15 +18,6 @@ capable of running an Operating System. Each Node must be associated with a
the ``node_ident``. Responses clearly indicate whether a given field is a
``uuid`` or a ``name``.
.. versionchanged:: 1.91
In older API versions, we have a pecan feature enabled that strips .json
extensions from the end of a resource reference query and treat it as if it
was referenced by just its UUID or ``node_ident``. E.g.
``0178-0c2c-9c26-ca69-3011-a9dd.json``, is treated as
``0178-0c2c-9c26-ca69-3011-a9dd``. This feature is now disabled in newer API
versions.
Depending on the Roles assigned to the authenticated OpenStack User, and upon
the configuration of the Bare Metal service, API responses may change. For
example, the default value of the "show_password" settings cause all API
@@ -116,15 +107,9 @@ supplied when the Node is created, or the resource may be updated later.
.. versionadded:: 1.82
Introduced the ``shard`` field.
.. versionadded:: 1.83
.. versionadded: 1.83
Introduced the ``parent_node`` field.
.. versionadded:: 1.95
Introduced the ``disable_power_off`` field.
.. versionadded:: 1.104
Introduced the ``instance_name`` field.
Normal response codes: 201
Error codes: 400,403,406
@@ -138,7 +123,6 @@ Request
- conductor_group: req_conductor_group
- console_interface: req_console_interface
- deploy_interface: req_deploy_interface
- disable_power_off: req_disable_power_off
- driver_info: req_driver_info
- driver: req_driver_name
- extra: req_extra
@@ -163,7 +147,6 @@ Request
- chassis_uuid: req_chassis_uuid
- instance_info: req_instance_info
- instance_uuid: req_instance_uuid
- instance_name: req_instance_name
- maintenance: req_maintenance
- maintenance_reason: maintenance_reason
- network_data: network_data
@@ -186,7 +169,7 @@ and any defaults added for non-specified fields. Most fields default to "null"
or "".
The list and example below are representative of the response as of API
microversion 1.95.
microversion 1.81.
.. rest_parameters:: parameters.yaml
@@ -207,7 +190,6 @@ microversion 1.95.
- properties: n_properties
- instance_info: instance_info
- instance_uuid: instance_uuid
- instance_name: instance_name
- chassis_uuid: chassis_uuid
- extra: extra
- console_enabled: console_enabled
@@ -248,7 +230,6 @@ microversion 1.95.
- network_data: network_data
- retired: retired
- retired_reason: retired_reason
- disable_power_off: disable_power_off
**Example JSON representation of a Node:**
@@ -319,9 +300,6 @@ provision state, and maintenance setting for each Node.
nodes to be enumerated, which are normally hidden as child nodes are not
normally intended for direct consumption by end users.
.. versionadded:: 1.104
Introduced the ``instance_name`` query parameter and response field.
Normal response codes: 200
Error codes: 400,403,406
@@ -332,7 +310,6 @@ Request
.. rest_parameters:: parameters.yaml
- instance_uuid: r_instance_uuid
- instance_name: r_instance_name
- maintenance: r_maintenance
- associated: r_associated
- provision_state: r_provision_state
@@ -421,10 +398,6 @@ Nova instance, eg. with a request to ``v1/nodes/detail?instance_uuid={NOVA INSTA
.. versionadded:: 1.82
Introduced the ``shard`` field. Introduced the ``sharded`` request parameter.
.. versionadded:: 1.104
Introduced the ``instance_name`` field.
Normal response codes: 200
Error codes: 400,403,406
@@ -435,7 +408,6 @@ Request
.. rest_parameters:: parameters.yaml
- instance_uuid: r_instance_uuid
- instance_name: r_instance_name
- maintenance: r_maintenance
- fault: r_fault
- associated: r_associated
@@ -476,7 +448,6 @@ Response
- properties: n_properties
- instance_info: instance_info
- instance_uuid: instance_uuid
- instance_name: instance_name
- chassis_uuid: chassis_uuid
- extra: extra
- console_enabled: console_enabled
@@ -516,15 +487,6 @@ Response
- retired: retired
- retired_reason: retired_reason
- network_data: network_data
- automated_clean: automated_clean
- service_step: service_step
- firmware_interface: firmware_interface
- provision_updated_at: provision_updated_at
- inspection_started_at: inspection_started_at
- inspection_finished_at: inspection_finished_at
- created_at: created_at
- updated_at: updated_at
- disable_power_off: disable_power_off
**Example detailed list of Nodes:**
@@ -583,12 +545,6 @@ only the specified set.
.. versionadded:: 1.83
Introduced the ``parent_node`` field.
.. versionadded:: 1.95
Introduced the ``disable_power_off`` field.
.. versionadded:: 1.104
Introduced the ``instance_name`` field.
Normal response codes: 200
Error codes: 400,403,404,406
@@ -623,7 +579,6 @@ Response
- properties: n_properties
- instance_info: instance_info
- instance_uuid: instance_uuid
- instance_name: instance_name
- chassis_uuid: chassis_uuid
- extra: extra
- console_enabled: console_enabled
@@ -660,7 +615,6 @@ Response
- conductor: conductor
- allocation_uuid: allocation_uuid
- network_data: network_data
- disable_power_off: disable_power_off
**Example JSON representation of a Node:**
@@ -687,9 +641,6 @@ managed through sub-resources.
.. versionadded:: 1.82
Introduced the ability to set/unset a node's shard.
.. versionadded:: 1.104
Introduced the ability to set/unset node's instance_name.
Normal response codes: 200
Error codes: 400,403,404,406,409
@@ -700,13 +651,6 @@ Request
The BODY of the PATCH request must be a JSON PATCH document, adhering to
`RFC 6902 <https://tools.ietf.org/html/rfc6902>`_.
.. note::
The ``instance_uuid`` field is an exception to the RFC 6902 behavior.
The "add" operator cannot replace an existing ``instance_uuid`` value.
Attempting to do so will result in a 409 Conflict error (NodeAssociated
exception). This protection prevents race conditions when multiple
Nova compute agents try to associate the same node.
.. rest_parameters:: parameters.yaml
- node_ident: node_ident
@@ -737,7 +681,6 @@ Response
- properties: n_properties
- instance_info: instance_info
- instance_uuid: instance_uuid
- instance_name: instance_name
- chassis_uuid: chassis_uuid
- extra: extra
- console_enabled: console_enabled
@@ -773,7 +716,6 @@ Response
- conductor: conductor
- allocation_uuid: allocation_uuid
- network_data: network_data
- disable_power_off: disable_power_off
**Example JSON representation of a Node:**


@@ -28,18 +28,6 @@ Response to include only the specified fields, rather than the default set.
.. versionadded:: 1.53
Added the ``is_smartnic`` response fields.
.. versionadded:: 1.88
Added the ``name`` field.
.. versionadded:: 1.97
Added the ``description`` field.
.. versionadded:: 1.100
Added the ``vendor`` field.
.. versionadded:: 1.101
Added the ``category`` field.
Normal response code: 200
Error codes: 400,401,403,404
@@ -84,18 +72,6 @@ Return a detailed list of bare metal Ports associated with ``portgroup_ident``.
.. versionadded:: 1.53
Added the ``is_smartnic`` response fields.
.. versionadded:: 1.88
Added the ``name`` field.
.. versionadded:: 1.97
Added the ``description`` field.
.. versionadded:: 1.100
Added the ``vendor`` field.
.. versionadded:: 1.101
Added the ``category`` field.
Normal response code: 200
Error codes: 400,401,403,404
@@ -130,10 +106,6 @@ Response
- updated_at: updated_at
- links: links
- is_smartnic: is_smartnic
- name: port_name
- description: port_description
- vendor: port_vendor
- category: port_category
**Example details of a Portgroup's Ports:**


@@ -32,10 +32,6 @@ By default, this query will return the UUID, name and address for each Portgroup
Added the ``detail`` boolean request parameter. When specified ``True`` this
causes the response to include complete details about each portgroup.
.. versionadded:: 1.99
Added the ability to filter portgroups based on the ``conductor_group`` of the
node they are associated with.
Normal response code: 200
Error codes: 400,401,403,404
@@ -53,7 +49,6 @@ Request
- sort_dir: sort_dir
- sort_key: sort_key
- detail: detail
- conductor_group: r_conductor_group_port
Response
--------
@@ -82,12 +77,6 @@ Creates a new Portgroup resource.
This method requires a Node UUID and the physical hardware address for the
Portgroup (MAC address in most cases).
.. versionadded:: 1.102
Added the ``physical_network`` field.
.. versionadded:: 1.103
Added the ``category`` field.
Normal response code: 201
Error codes: 400,401,403,404
@@ -105,8 +94,6 @@ Request
- properties: req_portgroup_properties
- extra: req_extra
- uuid: req_uuid
- physical_network: req_physical_network
- category : req_portgroup_category
**Example Portgroup creation request:**
@@ -131,8 +118,6 @@ Response
- updated_at: updated_at
- links: links
- ports: pg_ports
- physical_network: physical_network
- category : portgroup_category
**Example Portgroup creation response:**
@@ -147,12 +132,6 @@ List Detailed Portgroups
Return a list of bare metal Portgroups, with detailed information.
.. versionadded:: 1.102
Added the ``physical_network`` field.
.. versionadded:: 1.103
Added the ``category`` field.
Normal response code: 200
Error codes: 400,401,403,404
@@ -188,8 +167,6 @@ Response
- updated_at: updated_at
- links: links
- ports: pg_ports
- physical_network: physical_network
- category: portgroup_category
**Example detailed Portgroup list response:**
@@ -204,12 +181,6 @@ Show Portgroup Details
Show details for the given Portgroup.
.. versionadded:: 1.102
Added the ``physical_network`` field.
.. versionadded:: 1.103
Added the ``category`` field.
Normal response code: 200
Error codes: 400,401,403,404
@@ -240,8 +211,6 @@ Response
- updated_at: updated_at
- links: links
- ports: pg_ports
- physical_network: physical_network
- category: portgroup_category
**Example Portgroup details:**
@@ -256,12 +225,6 @@ Update a Portgroup
Update a Portgroup.
.. versionadded:: 1.102
Added the ``physical_network`` field.
.. versionadded:: 1.103
Added the ``category`` field.
Normal response code: 200
Error codes: 400,401,403,404
@@ -299,8 +262,6 @@ Response
- updated_at: updated_at
- links: links
- ports: pg_ports
- physical_network: physical_network
- category: portgroup_category
**Example Portgroup update response:**


@@ -50,21 +50,8 @@ By default, this query will return the uuid and address for each Port.
Added the ``is_smartnic`` field.
.. versionadded:: 1.82
Added the ability to filter ports based on the ``shard`` of the node they
are associated with.
.. versionadded:: 1.97
Added the ``description`` field.
.. versionadded:: 1.99
Added the ability to filter ports based on the ``conductor_group`` of the
node they are associated with.
.. versionadded:: 1.100
Added the ``vendor`` field.
.. versionadded:: 1.101
Added the ``category`` field.
Added the ability to filter ports based on the shard of the node they are
associated with.
Normal response code: 200
@@ -78,7 +65,6 @@ Request
- portgroup: r_port_portgroup_ident
- address: r_port_address
- shard: r_port_shard
- conductor_group: r_conductor_group_port
- fields: fields
- limit: limit
- marker: marker
@@ -128,23 +114,6 @@ This method requires a Node UUID and the physical hardware address for the Port
.. versionadded:: 1.88
Added the ``name`` field.
.. versionadded:: 1.90
``local_link_connection`` fields now accepts a dictionary
of ``vtep-logical-switch``, ``vtep-physical-switch`` and ``port_id``
to identify ovn vtep switches.
.. versionadded:: 1.94
Added support to create ports passing in either the node name or UUID.
.. versionadded:: 1.97
Added the ``description`` field.
.. versionadded:: 1.100
Added the ``vendor`` field.
.. versionadded:: 1.101
Added the ``category`` field.
Normal response code: 201
Request
@@ -152,7 +121,7 @@ Request
.. rest_parameters:: parameters.yaml
- node_ident: node_ident
- node_uuid: req_node_uuid
- address: req_port_address
- portgroup_uuid: req_portgroup_uuid
- name: req_port_name
@@ -162,12 +131,6 @@ Request
- extra: req_extra
- is_smartnic: req_is_smartnic
- uuid: req_uuid
- description: req_port_description
- vendor: req_port_vendor
- category: req_port_category
.. note::
Either `node_ident` or `node_uuid` is a valid parameter.
**Example Port creation request:**
@@ -193,9 +156,6 @@ Response
- updated_at: updated_at
- links: links
- is_smartnic: is_smartnic
- description: port_description
- vendor: port_vendor
- category: port_category
**Example Port creation response:**
@@ -234,15 +194,6 @@ Return a list of bare metal Ports, with detailed information.
.. versionadded:: 1.88
Added the ``name`` field.
.. versionadded:: 1.97
Added the ``description`` field.
.. versionadded:: 1.100
Added the ``vendor`` field.
.. versionadded:: 1.101
Added the ``category`` field.
Normal response code: 200
Request
@ -280,9 +231,6 @@ Response
- updated_at: updated_at
- links: links
- is_smartnic: is_smartnic
- description: port_description
- vendor: port_vendor
- category: port_category
**Example detailed Port list response:**
@@ -315,16 +263,7 @@ Show details for the given Port.
Added the ``is_smartnic`` response fields.
.. versionadded:: 1.88
Added the ``name`` field.
.. versionadded:: 1.97
Added the ``description`` field.
.. versionadded:: 1.100
Added the ``vendor`` field.
.. versionadded:: 1.101
Added the ``category`` field.
Added the ``name``
Normal response code: 200
@@ -355,9 +294,6 @@ Response
- updated_at: updated_at
- links: links
- is_smartnic: is_smartnic
- description: port_description
- vendor: port_vendor
- category: port_category
**Example Port details:**
@@ -385,21 +321,7 @@ Update a Port.
Added the ``is_smartnic`` fields.
.. versionadded:: 1.88
Added the ``name`` field.
.. versionadded:: 1.90
``local_link_connection`` fields now accepts a dictionary
of ``vtep-logical-switch``, ``vtep-physical-switch`` and ``port_id``
to identify ovn vtep switches.
.. versionadded:: 1.97
Added the ``description`` field.
.. versionadded:: 1.100
Added the ``vendor`` field.
.. versionadded:: 1.101
Added the ``category`` field.
Added the ``name``
Normal response code: 200
@@ -438,9 +360,6 @@ Response
- updated_at: updated_at
- links: links
- is_smartnic: is_smartnic
- description: port_description
- vendor: port_vendor
- category: port_category
**Example Port update response:**


@@ -1,245 +0,0 @@
.. -*- rst -*-
===================
Runbooks (runbooks)
===================
The Runbook resource represents a collection of steps that define a
series of actions to be executed on a node. Runbooks enable users to perform
complex operations in a predefined, automated manner. A runbook is
matched for a node if the runbook's name matches a trait in the node.
.. versionadded:: 1.92
Runbook API was introduced.
Create Runbook
==============
.. rest_method:: POST /v1/runbooks
Creates a runbook.
.. versionadded:: 1.92
Runbook API was introduced.
Normal response codes: 201
Error response codes: 400, 401, 403, 409
Request
-------
.. rest_parameters:: parameters.yaml
- name: runbook_name
- steps: runbook_steps
- disable_ramdisk: req_disable_ramdisk
- uuid: req_uuid
- extra: req_extra
Request Runbook Step
--------------------
.. rest_parameters:: parameters.yaml
- interface: runbook_step_interface
- step: runbook_step_step
- args: runbook_step_args
- order: runbook_step_order
Request Example
---------------
.. literalinclude:: samples/runbook-create-request.json
:language: javascript
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- uuid: uuid
- name: runbook_name
- steps: runbook_steps
- disable_ramdisk: disable_ramdisk
- extra: extra
- public: runbook_public
- owner: runbook_owner
- created_at: created_at
- updated_at: updated_at
- links: links
Response Example
----------------
.. literalinclude:: samples/runbook-create-response.json
:language: javascript
List Runbooks
=============
.. rest_method:: GET /v1/runbooks
Lists all runbooks.
.. versionadded:: 1.92
Runbook API was introduced.
Normal response codes: 200
Error response codes: 400, 401, 403, 404
Request
-------
.. rest_parameters:: parameters.yaml
- fields: fields
- limit: limit
- marker: marker
- sort_dir: sort_dir
- sort_key: sort_key
- detail: detail
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- uuid: uuid
- name: runbook_name
- disable_ramdisk: disable_ramdisk
- steps: runbook_steps
- extra: extra
- public: runbook_public
- owner: runbook_owner
- created_at: created_at
- updated_at: updated_at
- links: links
Response Example
----------------
**Example runbook list response:**
.. literalinclude:: samples/runbook-list-response.json
:language: javascript
**Example detailed runbook list response:**
.. literalinclude:: samples/runbook-detail-response.json
:language: javascript
Show Runbook Details
====================
.. rest_method:: GET /v1/runbooks/{runbook_id}
Shows details for a runbook.
.. versionadded:: 1.92
Runbook API was introduced.
Normal response codes: 200
Error response codes: 400, 401, 403, 404
Request
-------
.. rest_parameters:: parameters.yaml
- fields: fields
- runbook_id: runbook_ident
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- uuid: uuid
- name: runbook_name
- steps: runbook_steps
- disable_ramdisk: disable_ramdisk
- extra: extra
- public: runbook_public
- owner: runbook_owner
- created_at: created_at
- updated_at: updated_at
- links: links
Response Example
----------------
.. literalinclude:: samples/runbook-show-response.json
:language: javascript
Update a Runbook
================
.. rest_method:: PATCH /v1/runbooks/{runbook_id}
Update a runbook.
.. versionadded:: 1.92
Runbook API was introduced.
Normal response code: 200
Error response codes: 400, 401, 403, 404, 409
Request
-------
The BODY of the PATCH request must be a JSON PATCH document, adhering to
`RFC 6902 <https://tools.ietf.org/html/rfc6902>`_.
Request
-------
.. rest_parameters:: parameters.yaml
- runbook_id: runbook_ident
.. literalinclude:: samples/runbook-update-request.json
:language: javascript
Response
--------
.. rest_parameters:: parameters.yaml
- uuid: uuid
- name: runbook_name
- steps: runbook_steps
- disable_ramdisk: disable_ramdisk
- extra: extra
- public: runbook_public
- owner: runbook_owner
- created_at: created_at
- updated_at: updated_at
- links: links
.. literalinclude:: samples/runbook-update-response.json
:language: javascript
Delete Runbook
==============
.. rest_method:: DELETE /v1/runbooks/{runbook_id}
Deletes a runbook.
.. versionadded:: 1.92
Runbook API was introduced.
Normal response codes: 204
Error response codes: 400, 401, 403, 404
Request
-------
.. rest_parameters:: parameters.yaml
- runbook_id: runbook_ident


@@ -12,15 +12,10 @@ supports versioning. There are two kinds of versions in Ironic.
- ''major versions'', which have dedicated urls.
- ''microversions'', which can be requested through the use of the
``X-OpenStack-Ironic-API-Version`` header or the new standard singular header
``OpenStack-API-Version: baremetal <version>``.
``X-OpenStack-Ironic-API-Version`` header.
The Version APIs work differently from other APIs as they *do not* require authentication.
Upon the Dalmatian release, all API requests support the
new standard singular header ``OpenStack-API-Version: baremetal <version>``.
If that's not present, we fall back to the legacy headers.
Beginning with the Kilo release, all API requests support the
``X-OpenStack-Ironic-API-Version`` header. This header SHOULD be supplied
with every request; in the absence of this header, each request is treated
@@ -84,13 +79,9 @@ Response Example
- id: id
- links: links
- openstack-request-id: openstack-request-id
- x-openstack-request-id: x-openstack-request-id
- x-openstack-ironic-api-version: header_version
- x-openstack-ironic-api-min-version: x-openstack-ironic-api-min-version
- x-openstack-ironic-api-max-version: x-openstack-ironic-api-max-version
.. literalinclude:: samples/api-v1-root-response.json
:language: javascript
.. versionadded:: 1.107
Added the ``X-OpenStack-Request-Id`` header.


@@ -9,7 +9,6 @@
.. include:: baremetal-api-versions.inc
.. include:: baremetal-api-v1-nodes.inc
.. include:: baremetal-api-v1-node-management.inc
.. include:: baremetal-api-v1-attach-detach-vmedia.inc
.. include:: baremetal-api-v1-node-passthru.inc
.. include:: baremetal-api-v1-nodes-traits.inc
.. include:: baremetal-api-v1-nodes-vifs.inc
@@ -29,11 +28,9 @@
.. include:: baremetal-api-v1-allocation.inc
.. include:: baremetal-api-v1-node-allocation.inc
.. include:: baremetal-api-v1-deploy-templates.inc
.. include:: baremetal-api-v1-runbooks.inc
.. include:: baremetal-api-v1-nodes-history.inc
.. include:: baremetal-api-v1-nodes-inventory.inc
.. include:: baremetal-api-v1-shards.inc
.. include:: baremetal-api-v1-inspection-rules.inc
.. NOTE(dtantsur): keep chassis close to the end since it's semi-deprecated
.. include:: baremetal-api-v1-chassis.inc
.. NOTE(dtantsur): keep misc last, since it covers internal API


@@ -36,12 +36,6 @@ x-openstack-ironic-api-version:
in: header
required: true
type: string
x-openstack-request-id:
description: >
This mirrors the ``openstack-request-id`` header.
in: header
required: false
type: string
# variables in path
allocation_ident:
@@ -122,12 +116,6 @@ portgroup_ident:
in: path
required: true
type: string
runbook_ident:
description: |
The UUID or name of the runbook.
in: path
required: true
type: string
trait:
description: |
A single trait for this node.
@@ -299,29 +287,12 @@ r_conductor_group:
in: query
required: false
type: string
r_conductor_group_port:
description: |
Filter the list of returned ports or portgroups, and only return those with
the specified ``conductor_group`` or an empty set if none found. List of
case-insensitive strings up to 255 characters, containing ``a-z``, ``0-9``,
``_``, ``-``, and ``.``. This cannot be used if ``node``, ``node_uuid``,
``portgroup`` or ``address`` is specified.
For example, the following request returns only the ports for nodes
in conductor groups ``bear`` and ``metal``:
::
GET /v1/ports?conductor_groups=bear,metal
in: query
required: false
type: array
r_description_contains:
description: |
Filter the list of returned nodes, and only return those containing
substring specified by ``description_contains``.
in: query
required: false
requred: false
type: string
r_driver:
description: |
@@ -338,13 +309,6 @@ r_fault:
in: query
required: false
type: string
r_instance_name:
description: |
Filter the list of returned nodes, and only return the node with this
specific instance name, or an empty set if not found.
in: query
required: false
type: string
r_instance_uuid:
description: |
Filter the list of returned nodes, and only return the node with this
@@ -581,7 +545,7 @@ bios_interface:
type: string
bios_setting_allowable_values:
description: |
A list of allowable values. May be ``null``.
A list of allowable values, otherwise ``null``.
in: body
required: true
type: array
@@ -611,19 +575,21 @@ bios_setting_min_length:
type: integer
bios_setting_name:
description: |
The name of a Bios setting for a Node, eg. ``virtualization``.
The name of a Bios setting for a Node, eg. "virtualization".
in: body
required: true
type: string
bios_setting_read_only:
description: |
This Bios setting is read only and can't be changed. May be ``null``.
This Bios seting is read only and can't be changed.
May be None.
in: body
required: true
type: boolean
bios_setting_reset_required:
description: |
After setting this Bios setting a node reboot is required. May be ``null``.
After setting this Bios setting a node reboot is required.
May be None.
in: body
required: true
type: boolean
@@ -641,7 +607,7 @@ bios_setting_upper_bound:
type: integer
bios_setting_value:
description: |
The value of a Bios setting for a Node, eg. "on". May be ``null``.
The value of a Bios setting for a Node, eg. "on".
in: body
required: true
type: string
@@ -922,15 +888,6 @@ description:
in: body
required: true
type: string
disable_power_off:
description: |
If set to true, power off for the node is explicitly disabled, instead, a
reboot will be used in place of power on/off. Additionally, when possible,
the node will be disabled (i.e., its API agent will be rendered unusable
and network configuration will be removed) instead of being powered off.
in: body
required: false
type: boolean
disable_ramdisk:
description: |
If set to ``true``, the ironic-python-agent ramdisk will not be booted for
@@ -1097,12 +1054,6 @@ firmware_components:
in: body
required: true
type: array
firmware_interface:
description: |
Firmware interface for a node, e.g. “redfish”.
in: body
required: true
type: string
history_event:
description: |
The event message body which has been logged related to the node for
@@ -1184,106 +1135,6 @@ inspection_finished_at:
in: body
required: true
type: string
inspection_rule_action_args:
description: |
A list (in the sense of Python ``*args``)
or a dict (in the sense of Python ``**kwargs``) with arguments for
the action operator.
in: body
required: true
type: array
inspection_rule_action_loop:
description: |
This is an Ansible-style loop field. It contains a list or dictionary
of items to iterate over for the same action.
in: body
required: false
type: array
inspection_rule_action_op:
description: |
The operator to execute with specified arguments when conditions are met.
in: body
required: true
type: string
inspection_rule_actions:
description: |
A list of actions to run during inspection. An action is a dictionary
or list, with required keys 'op' and 'args', and optional key 'loop'.
in: body
required: true
type: array
inspection_rule_condition_args:
description: |
A list (in the sense of Python ``*args``)
or a dict (in the sense of Python ``**kwargs``) with arguments for
the condition operator.
in: body
required: true
type: array
inspection_rule_condition_loop:
description: |
This is an Ansible-style loop field. It contains a list or dictionary
of items to iterate over for the same condition.
in: body
required: false
type: array
inspection_rule_condition_multiple:
description: |
Determines how the results of all loop iterations are combined, whether
a condition is returned as true if 'any' check passes,
or only when 'all'; the 'first', or the 'last' check is true.
in: body
required: false
type: string
inspection_rule_condition_op:
description: |
The operator to run conditions by, with specified arguments.
in: body
required: true
type: string
inspection_rule_conditions:
description: |
A list of conditions to check before applying the rule. A
condition is a dictionary or list, with required keys 'op' and 'args', and
optional keys 'loop' and 'multiple'.
in: body
required: false
type: array
inspection_rule_description:
description: |
Informational text about this rule.
in: body
required: false
type: string
inspection_rule_ident:
description: |
The UUID of the inspection rule.
in: body
required: false
type: string
inspection_rule_phase:
description: |
Specifies the phase when the rule should run, defaults to 'main'.
in: body
required: false
type: string
inspection_rule_priority:
description: |
A non-negative integer priority for the rule. Specifies the rule's
precedence level during execution. Priorities between 0 and 9999 can be
used by all rules, negative value and values above 10000 are reserved for
built-in rules. The default priority is 0.
in: body
required: false
type: int
inspection_rule_sensitive:
description: |
Indicates whether the rule contains sensitive information. A sensitive
rule will also have the ability to see sensitive fields on inspection
data.
in: body
required: false
type: string
inspection_started_at:
description: |
The UTC date and time when the hardware inspection was started,
@@ -1301,24 +1152,9 @@ instance_info:
in: body
required: true
type: JSON
instance_name:
description: |
A human-readable name for the instance deployed on this node. This is
automatically synchronized with the ``display_name`` from the node's
``instance_info`` for backward compatibility with Nova.
in: body
required: false
type: string
instance_uuid:
description: |
UUID of the Nova instance associated with this Node.
.. note::
This field does not follow standard JSON PATCH RFC 6902 behavior.
The "add" operator cannot replace an existing instance_uuid value.
Attempting to do so will result in a 409 Conflict error (NodeAssociated
exception). This protection prevents race conditions when multiple
Nova compute agents try to associate the same node.
in: body
required: true
type: string
@@ -1445,8 +1281,8 @@ n_ports:
type: array
n_properties:
description: |
Physical characteristics of this Node. Populated during inspection. May be
edited via the REST API at any time.
Physical characteristics of this Node. Populated by ironic-inspector during
inspection. May be edited via the REST API at any time.
in: body
required: true
type: JSON
@@ -1584,8 +1420,8 @@ pg_ports:
type: array
physical_network:
description: |
The name of the physical network to which a port or portgroup is
connected. May be empty.
The name of the physical network to which a port is connected. May be
empty.
in: body
required: true
type: string
@@ -1596,30 +1432,12 @@ port_address:
in: body
required: true
type: string
port_category:
description: |
Category of the network Port. Helps to further differentiate the Port.
in: body
required: false
type: string
port_description:
description: |
Descriptive text about the network Port.
in: body
required: false
type: string
port_name:
description: |
The name assigned to the network Port.
in: body
required: false
type: string
port_vendor:
description: |
Name of the hardware vendor of the network Port.
in: body
required: false
type: string
portgroup_address:
description: |
Physical hardware address of this Portgroup, typically the hardware
@@ -1627,13 +1445,6 @@ portgroup_address:
in: body
required: false
type: string
portgroup_category:
description: |
Category of the network Portgroup. Helps to further differentiate the
Portgroup.
in: body
required: false
type: string
portgroup_internal_info:
description: |
Internal metadata set and stored by the Portgroup. This field is read-only.
@@ -1860,22 +1671,6 @@ req_description:
in: body
required: false
type: string
req_disable_power_off:
description: |
If set to ``true``, power off for the node is explicitly disabled, instead, a
reboot will be used in place of power on/off. Additionally, when possible,
the node will be disabled (i.e., its API agent will be rendered unusable
and network configuration will be removed) instead of being powered off.
in: body
required: false
type: boolean
req_disable_ramdisk:
description: |
Whether to boot ramdisk while using a runbook for cleaning or servicing
operation.
in: body
required: false
type: boolean
req_driver_info:
description: |
All the metadata required by the driver to manage this Node. List of fields
@@ -1903,12 +1698,6 @@ req_inspect_interface:
in: body
required: false
type: string
req_inspection_rule_phase:
description: |
Specifies the phase when the rule should run, defaults to 'main'.
in: body
required: false
type: string
req_instance_info:
description: |
Information used to customize the deployed image. May include root partition
@@ -1918,24 +1707,9 @@ req_instance_info:
in: body
required: false
type: JSON
req_instance_name:
description: |
A human-readable name for the instance deployed on this node. This is
automatically synchronized with the ``display_name`` from the node's
``instance_info`` for backward compatibility with Nova.
in: body
required: false
type: string
req_instance_uuid:
description: |
UUID of the Nova instance associated with this Node.
.. note::
This field does not follow standard JSON PATCH RFC 6902 behavior.
The "add" operator cannot replace an existing instance_uuid value.
Attempting to do so will result in a 409 Conflict error (NodeAssociated
exception). This protection prevents race conditions when multiple
Nova compute agents try to associate the same node.
in: body
required: false
type: string
@@ -2019,8 +1793,8 @@ req_persistent:
type: boolean
req_physical_network:
description: |
The name of the physical network to which a port or portgroup is connected.
May be empty.
The name of the physical network to which a port is connected. May be
empty.
in: body
required: false
type: string
@@ -2031,30 +1805,12 @@ req_port_address:
in: body
required: true
type: string
req_port_category:
description: |
Category of the network Port. Helps to further differentiate the Port.
in: body
required: false
type: string
req_port_description:
description: |
Descriptive text about the network Port.
in: body
required: false
type: string
req_port_name:
description: |
The name assigned to the network Port.
in: body
required: false
type: string
req_port_vendor:
description: |
Name of the hardware vendor of the network Port.
in: body
required: false
type: string
req_portgroup_address:
description: |
Physical hardware address of this Portgroup, typically the hardware
@@ -2062,13 +1818,6 @@ req_portgroup_address:
in: body
required: false
type: string
req_portgroup_category:
description: |
Category of the network Portgroup. Helps to further differentiate the
Portgroup.
in: body
required: false
type: string
req_portgroup_mode:
description: |
Mode of the port group. For possible values, refer to
@@ -2275,83 +2024,11 @@ retired_reason:
in: body
required: false
type: string
runbook_name:
description: |
The unique name of the runbook. It must be prefixed with ``CUSTOM_``,
which makes it conform to the TRAITS_SCHEMA format. The runbook name must
match a node trait indicating it can run on a node.
in: body
required: true
type: string
runbook_owner:
description: |
The unique identifier of the runbook owner.
This must be ``null`` if ``runbook_public`` is ``true``.
in: body
required: false
type: string
runbook_public:
description: |
Indicates whether a runbook is available for public use or not.
This must be ``false`` if ``runbook_owner`` is not ``null``.
in: body
required: false
type: boolean
runbook_step_args:
description: |
A dictionary of arguments that are passed to the runbook step method.
in: body
required: true
type: object
runbook_step_interface:
description: |
The name of the driver interface.
in: body
required: true
type: string
runbook_step_order:
description: |
A non-negative integer order for the step.
in: body
required: true
type: integer
runbook_step_step:
description: |
The name of the runbook step method on the driver interface.
in: body
required: true
type: string
runbook_steps:
description: |
The runbook steps of the runbook template. Must be a list of dictionaries
containing at least one runbook step. See `Request Runbook Step`_ for step
parameters.
in: body
required: true
type: array
secure_boot:
description: |
Indicates whether node is currently booted with secure_boot turned on.
in: body
type: boolean
service_step:
description: |
A dictionary containing the interface and step to be executed on the node.
The dictionary must contain the keys 'interface' and 'step'. If specified,
the value for 'args' is a keyword variable argument dictionary that is
passed to the service step method.
in: body
required: true
type: JSON
service_steps:
description: |
An ordered list of service steps that will be performed on the node. A
service step is a dictionary with required keys 'interface' and 'step', and
optional key 'args'. If specified, the value for 'args' is a keyword variable
argument dictionary that is passed to the service step method.
in: body
required: false
type: array
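A hedged sketch of a servicing request using these fields; the body mirrors the service request sample elsewhere in this change, and the microversion value is an assumption:

.. code-block:: shell

   curl -X PUT http://10.60.253.180:6385/v1/nodes/<node>/states/provision \
        -H "Content-Type: application/json" \
        -H "X-OpenStack-Ironic-API-Version: 1.87" \
        -d '{"target": "service",
             "service_steps": [{"interface": "raid",
                                "step": "apply_configuration",
                                "args": {"create_nonroot_volumes": "True"}}]}'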
shard:
description: |
A string indicating the shard this node belongs to.


@ -1,34 +0,0 @@
{
"description": "BMC credentials",
"phase": "main",
"priority": 100,
"sensitive": true,
"conditions": [
{
"op": "contains",
"args": {"value": "{inventory[system_vendor][manufacturer]}", "regex": "(?i)dell"}
},
{
"op": "is-true",
"args": {"value": "{node.auto_discovered}"}
}
],
"actions": [
{
"op": "set-attribute",
"args": {"path": "/driver", "value": "idrac"}
},
{
"op": "set-attribute",
"args": {"path": "driver_info.redfish_address", "value": "https://{inventory[bmc_address]}"}
},
{
"op": "set-attribute",
"args": {"path": "/driver_info/redfish_username", "value": "admin"}
},
{
"op": "set-attribute",
"args": {"path": "/driver_info/redfish_password", "value": "password"}
}
]
}
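A request body like the one above can be submitted to the inspection rules endpoint shown in the neighbouring samples; a minimal sketch, assuming the body is saved as ``rule.json`` and a sufficiently recent API microversion:

.. code-block:: shell

   curl -X POST http://10.60.253.180:6385/v1/inspection_rules \
        -H "Content-Type: application/json" \
        -d @rule.json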


@ -1,21 +0,0 @@
{
"created_at": "2025-03-18T22:28:48.643434+11:11",
"description": "BMC credentials",
"phase": "main",
"priority": 100,
"sensitive": true,
"actions": null,
"conditions": null,
"links": [
{
"href": "http://10.60.253.180:6385/v1/inspection_rules/783bf33a-a8e3-1e23-a645-1e95a1f95186",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/inspection_rules/783bf33a-a8e3-1e23-a645-1e95a1f95186",
"rel": "bookmark"
}
],
"updated_at": null,
"uuid": "783bf33a-a8e3-1e23-a645-1e95a1f95186"
}


@ -1,43 +0,0 @@
{
"inspection_rules": [
{
"created_at": "2025-03-14T15:37:29.542187+00:00",
"description": "Set properties on discovered data",
"phase": "main",
"priority": 50,
"sensitive": false,
"conditions": [
{
"op": "is-true",
"args": {"value": "{inventory[cpu][count]}"}
}
],
"actions": [
{
"op": "set-attribute",
"args": {"path": "/properties/cpus", "value": "{inventory[cpu][count]}"}
},
{
"op": "set-attribute",
"args": {"path": "/properties/memory_mb", "value": "{inventory[memory][physical_mb]}"}
},
{
"op": "set-attribute",
"args": {"path": "/properties/cpu_arch", "value": "{inventory[cpu][architecture]}"}
}
],
"links": [
{
"href": "http://10.60.253.180:6385/v1/inspection_rules/75a6c1f7-2de0-47b3-9c54-8e6ef3a27bcd",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/inspection_rules/75a6c1f7-2de0-47b3-9c54-8e6ef3a27bcd",
"rel": "bookmark"
}
],
"updated_at": null,
"uuid": "783bf33a-a8e3-1e23-a645-1e95a1f95186"
}
]
}


@ -1,55 +0,0 @@
{
"inspection_rules": [
{
"description": "BMC credentials",
"phase": "main",
"priority": 100,
"sensitive": true,
"links": [
{
"href": "http://10.60.253.180:6385/v1/inspection_rules/783bf33a-a8e3-1e23-a645-1e95a1f95186",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/inspection_rules/783bf33a-a8e3-1e23-a645-1e95a1f95186",
"rel": "bookmark"
}
],
"uuid": "783bf33a-a8e3-1e23-a645-1e95a1f95186"
},
{
"description": "Set properties on discovered data",
"phase": "main",
"priority": 50,
"sensitive": false,
"links": [
{
"href": "http://10.60.253.180:6385/v1/inspection_rules/1f3ee449-08cd-9e3f-e1e5-9cfda674081a",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/inspection_rules/1f3ee449-08cd-9e3f-e1e5-9cfda674081a",
"rel": "bookmark"
}
],
"uuid": "1f3ee449-08cd-9e3f-e1e5-9cfda674081a"
},
{
"description": "Memory systems",
"phase": "main",
"priority": 0,
"sensitive": false,
"links": [
{
"href": "http://10.60.253.180:6385/v1/inspection_rules/210055f4-7367-ff8d-ae42-f4f9e8e85e8a",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/inspection_rules/210055f4-7367-ff8d-ae42-f4f9e8e85e8a",
"rel": "bookmark"
}
],
"uuid": "210055f4-7367-ff8d-ae42-f4f9e8e85e8a"
}
]
}


@ -1,39 +0,0 @@
{
"created_at": "2025-03-18T22:28:48.643434+11:11",
"description": "Set properties on discovered data",
"phase": "main",
"priority": 50,
"sensitive": false,
"conditions": [
{
"op": "is-true",
"args": {"value": "{inventory[cpu][count]}"}
}
],
"actions": [
{
"op": "set-attribute",
"args": {"path": "/properties/cpus", "value": "{inventory[cpu][count]}"}
},
{
"op": "set-attribute",
"args": {"path": "/properties/memory_mb", "value": "{inventory[memory][physical_mb]}"}
},
{
"op": "set-attribute",
"args": {"path": "/properties/cpu_arch", "value": "{inventory[cpu][architecture]}"}
}
],
"links": [
{
"href": "http://10.60.253.180:6385/v1/inspection_rules/1f3ee449-08cd-9e3f-e1e5-9cfda674081a",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/inspection_rules/1f3ee449-08cd-9e3f-e1e5-9cfda674081a",
"rel": "bookmark"
}
],
"updated_at": null,
"uuid": "1f3ee449-08cd-9e3f-e1e5-9cfda674081a"
}


@ -1,28 +0,0 @@
[
{
"path": "/description",
"value": "Updated rule for setting hardware properties",
"op": "replace"
},
{
"path": "/priority",
"value": 75,
"op": "replace"
},
{
"path": "/conditions/0",
"value": {
"op": "is-true",
"args": {"value": "{inventory[cpu][count]}"}
},
"op": "replace"
},
{
"path": "/actions/-",
"value": {
"op": "set-attribute",
"args": {"path": "/properties/local_gb", "value": "{inventory[disks][0][size]}"}
},
"op": "add"
}
]
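To apply this JSON Patch, a sketch along these lines should work, assuming the document is saved as ``patch.json`` and using the rule UUID from the surrounding samples:

.. code-block:: shell

   curl -X PATCH http://10.60.253.180:6385/v1/inspection_rules/1f3ee449-08cd-9e3f-e1e5-9cfda674081a \
        -H "Content-Type: application/json" \
        -d @patch.json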


@ -1,43 +0,0 @@
{
"created_at": "2025-03-23T22:28:48.643434+11:11",
"description": "Updated rule for setting hardware properties",
"phase": "main",
"priority": 75,
"sensitive": false,
"conditions": [
{
"op": "is-true",
"args": {"value": "{inventory[cpu][count]}"}
}
],
"actions": [
{
"op": "set-attribute",
"args": {"path": "/properties/cpus", "value": "{inventory[cpu][count]}"}
},
{
"op": "set-attribute",
"args": {"path": "/properties/memory_mb", "value": "{inventory[memory][physical_mb]}"}
},
{
"op": "set-attribute",
"args": {"path": "/properties/cpu_arch", "value": "{inventory[cpu][architecture]}"}
},
{
"op": "set-attribute",
"args": {"path": "/properties/local_gb", "value": "{inventory[disks][0][size]}"}
}
],
"links": [
{
"href": "http://10.60.253.180:6385/v1/inspection_rules/1f3ee449-08cd-9e3f-e1e5-9cfda674081a",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/inspection_rules/1f3ee449-08cd-9e3f-e1e5-9cfda674081a",
"rel": "bookmark"
}
],
"uuid": "1f3ee449-08cd-9e3f-e1e5-9cfda674081a",
"updated_at": "2025-03-24T11:42:18.763029+00:00"
}


@ -16,12 +16,12 @@
"value": "Enabled",
"attribute_type": "Enumeration",
"allowable_values": ["Enabled", "Disabled"],
"lower_bound": null,
"max_length": null,
"min_length": null,
"lower_bound": None,
"max_length": None,
"min_length": None,
"read_only": false,
"reset_required": null,
"unique": null,
"upper_bound": null
"reset_required": None,
"unique": None,
"upper_bound": None
}
}


@ -17,13 +17,13 @@
"value": "Enabled",
"attribute_type": "Enumeration",
"allowable_values": ["Enabled", "Disabled"],
"lower_bound": null,
"max_length": null,
"min_length": null,
"lower_bound": None,
"max_length": None,
"min_length": None,
"read_only": false,
"reset_required": null,
"unique": null,
"upper_bound": null
"reset_required": None,
"unique": None,
"upper_bound": None
}
]
}


@ -22,7 +22,6 @@
"inspection_started_at": null,
"instance_info": {},
"instance_uuid": null,
"instance_name": null,
"last_error": null,
"lessee": null,
"links": [


@ -4,11 +4,13 @@
"name": "system",
"links": [
{
"href": "http://127.0.0.1:6385/v1/nodes/Compute0/management/indicators/system",
"href": "http://127.0.0.1:6385/v1/nodes/Compute0/
management/indicators/system",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/Compute0/management/indicators/system",
"href": "http://127.0.0.1:6385/nodes/Compute0/
management/indicators/system",
"rel": "bookmark"
}
]
@ -17,11 +19,13 @@
"name": "chassis",
"links": [
{
"href": "http://127.0.0.1:6385/v1/nodes/Compute0/management/indicators/chassis",
"href": "http://127.0.0.1:6385/v1/nodes/Compute0/
management/indicators/chassis",
"rel": "self"
},
{
"href": "http://127.0.0.1:6385/nodes/Compute0/management/indicators/chassis",
"href": "http://127.0.0.1:6385/nodes/Compute0/
management/indicators/chassis",
"rel": "bookmark"
}
]


@ -23,8 +23,6 @@
},
"node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
"physical_network": "physnet1",
"vendor": "splitrock",
"category": "hupernet",
"portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a",
"pxe_enabled": true,
"updated_at": "2016-08-18T22:28:49.653974+00:00",


@ -1,4 +0,0 @@
{
"target": "clean",
"runbook": "runbook_ident"
}


@ -1,12 +0,0 @@
{
"target":"service",
"sevice_steps": [
{
"interface": "raid",
"step": "apply_configuration",
"args": {
"create_nonroot_volumes": "True"
}
}
]
}


@ -25,7 +25,6 @@
"inspection_started_at": null,
"instance_info": {},
"instance_uuid": null,
"instance_name": null,
"last_error": null,
"lessee": null,
"links": [


@ -26,7 +26,6 @@
"inspection_started_at": null,
"instance_info": {},
"instance_uuid": null,
"instance_name": null,
"last_error": null,
"lessee": null,
"links": [


@ -27,7 +27,6 @@
"inspection_started_at": null,
"instance_info": {},
"instance_uuid": "5344a3e2-978a-444e-990a-cbf47c62ef88",
"instance_name": "my-test-instance",
"last_error": null,
"lessee": null,
"links": [
@ -134,7 +133,6 @@
"inspection_started_at": null,
"instance_info": {},
"instance_uuid": null,
"instance_name": null,
"last_error": null,
"lessee": null,
"links": [


@ -1,10 +1,7 @@
{
"node_ident": "6d85703a-565d-469a-96ce-30b6de53079d",
"node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
"portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a",
"name": "port1",
"description": "Physical Network",
"vendor": "splitrock",
"category": "hypernet",
"address": "11:11:11:11:11:11",
"is_smartnic": true,
"local_link_connection": {


@ -20,9 +20,6 @@
"switch_info": "switch1"
},
"name": "port1",
"description": "Physical Network",
"vendor": "splitrock",
"category": "hypernet",
"node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
"physical_network": "physnet1",
"portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a",


@ -22,9 +22,6 @@
"switch_info": "switch1"
},
"name": "port1",
"description": "Physical Network",
"vendor": "splitrock",
"category": "hypernet",
"node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
"physical_network": "physnet1",
"portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a",


@ -20,9 +20,6 @@
"switch_info": "switch1"
},
"name": "port1",
"description": "Physical Network",
"vendor": "splitrock",
"category": "hypernet",
"node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
"physical_network": "physnet1",
"portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a",


@ -1,6 +1,5 @@
{
"address": "11:11:11:11:11:11",
"category": "hypernet",
"created_at": "2016-08-18T22:28:48.643434+11:11",
"extra": {},
"internal_info": {},
@ -27,7 +26,6 @@
"rel": "bookmark"
}
],
"physical_network": "physnet1",
"properties": {},
"standalone_ports_supported": true,
"updated_at": null,


@ -2,7 +2,6 @@
"portgroups": [
{
"address": "11:11:11:11:11:11",
"category": "hypernet",
"created_at": "2016-08-18T22:28:48.643434+11:11",
"extra": {},
"internal_info": {},
@ -29,7 +28,6 @@
"rel": "bookmark"
}
],
"physical_network": "physnet1",
"properties": {},
"standalone_ports_supported": true,
"updated_at": null,


@ -1,6 +1,5 @@
{
"address": "22:22:22:22:22:22",
"category": "hypernet",
"created_at": "2016-08-18T22:28:48.643434+11:11",
"extra": {},
"internal_info": {},
@ -27,7 +26,6 @@
"rel": "bookmark"
}
],
"physical_network": "physnet1",
"properties": {},
"standalone_ports_supported": true,
"updated_at": "2016-08-18T22:28:49.653974+00:00",


@ -1,19 +0,0 @@
{
"extra": {},
"name": "CUSTOM_AWESOME",
"steps": [
{
"interface": "bios",
"step": "apply_configuration",
"args": {
"settings": [
{
"name": "LogicalProc",
"value": "Enabled"
}
]
},
"order": 1
}
]
}


@ -1,34 +0,0 @@
{
"created_at": "2024-08-18T22:28:48.643434+11:11",
"extra": {},
"links": [
{
"href": "http://10.60.253.180:6385/v1/runbooks/fc6b1709-8dd5-86b0-2d34-5203d1c29127",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/runbooks/fc6b1709-8dd5-86b0-2d34-5203d1c29127",
"rel": "bookmark"
}
],
"name": "CUSTOM_AWESOME",
"public": false,
"owner": null,
"steps": [
{
"args": {
"settings": [
{
"name": "LogicalProc",
"value": "Enabled"
}
]
},
"interface": "bios",
"order": 1,
"step": "apply_configuration"
}
],
"updated_at": null,
"uuid": "fc6b1709-8dd5-86b0-2d34-5203d1c29127"
}


@ -1,39 +0,0 @@
{
"runbooks": [
{
"created_at": "2024-08-18T22:28:48.643434+11:11",
"disable_ramdisk": false,
"extra": {},
"links": [
{
"href": "http://10.60.253.180:6385/v1/runbooks/fc6b1709-8dd5-86b0-2d34-5203d1c29127",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/runbooks/fc6b1709-8dd5-86b0-2d34-5203d1c29127",
"rel": "bookmark"
}
],
"name": "CUSTOM_AWESOME",
"public": false,
"owner": null,
"steps": [
{
"args": {
"settings": [
{
"name": "LogicalProc",
"value": "Enabled"
}
]
},
"interface": "bios",
"order": 1,
"step": "apply_configuration"
}
],
"updated_at": null,
"uuid": "fc6b1709-8dd5-86b0-2d34-5203d1c29127"
}
]
}


@ -1,18 +0,0 @@
{
"runbooks": [
{
"links": [
{
"href": "http://10.60.253.180:6385/v1/runbooks/fc6b1709-8dd5-86b0-2d34-5203d1c29127",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/runbooks/fc6b1709-8dd5-86b0-2d34-5203d1c29127",
"rel": "bookmark"
}
],
"name": "CUSTOM_AWESOME",
"uuid": "fc6b1709-8dd5-86b0-2d34-5203d1c29127"
}
]
}


@ -1,35 +0,0 @@
{
"created_at": "2024-08-18T22:28:48.643434+11:11",
"disable_ramdisk": false,
"extra": {},
"links": [
{
"href": "http://10.60.253.180:6385/v1/runbooks/fc6b1709-8dd5-86b0-2d34-5203d1c29127",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/runbooks/fc6b1709-8dd5-86b0-2d34-5203d1c29127",
"rel": "bookmark"
}
],
"name": "CUSTOM_AWESOME",
"public": false,
"owner": null,
"steps": [
{
"args": {
"settings": [
{
"name": "LogicalProc",
"value": "Enabled"
}
]
},
"interface": "bios",
"order": 1,
"step": "apply_configuration"
}
],
"updated_at": null,
"uuid": "fc6b1709-8dd5-86b0-2d34-5203d1c29127"
}


@ -1,7 +0,0 @@
[
{
"path" : "/name",
"value" : "CUSTOM_AWESOME2",
"op" : "replace"
}
]


@ -1,34 +0,0 @@
{
"created_at": "2024-08-18T22:28:48.643434+11:11",
"extra": {},
"links": [
{
"href": "http://10.60.253.180:6385/v1/runbooks/fc6b1709-8dd5-86b0-2d34-5203d1c29127",
"rel": "self"
},
{
"href": "http://10.60.253.180:6385/runbooks/fc6b1709-8dd5-86b0-2d34-5203d1c29127",
"rel": "bookmark"
}
],
"name": "CUSTOM_AWESOME2",
"public": false,
"owner": null,
"steps": [
{
"args": {
"settings": [
{
"name": "LogicalProc",
"value": "Enabled"
}
]
},
"interface": "bios",
"order": 1,
"step": "apply_configuration"
}
],
"updated_at": "2024-08-18T22:28:49.653974+00:00",
"uuid": "fc6b1709-8dd5-86b0-2d34-5203d1c29127"
}


@ -2,7 +2,7 @@
"shards": [
{
"count": 47,
"name": "example_shard1"
"name": "example_shard1",
},
{
"count": 46,


@ -3,7 +3,7 @@ ipmitool [default]
ipxe [platform:dpkg default]
ipxe-bootimgs [platform:rpm default]
socat [default]
xinetd [default !platform:centos-9 !platform:rhel-9]
xinetd [default]
tftpd-hpa [platform:dpkg default]
tftp-server [platform:rpm default]
# Starting with Debian Jessie (and thus in Ubuntu Xenial too),
@ -32,7 +32,7 @@ libvirt-bin [platform:dpkg devstack]
libvirt [platform:rpm devstack]
libvirt-dev [platform:dpkg devstack]
libvirt-devel [platform:rpm devstack]
qemu-system [platform:dpkg devstack build-image-dib]
qemu [platform:dpkg devstack build-image-dib]
qemu-kvm [platform:dpkg devstack]
qemu-utils [platform:dpkg devstack build-image-dib]
qemu-system-data [platform:dpkg devstack]
@ -53,6 +53,12 @@ libssl-dev [platform:dpkg test]
libffi-dev [platform:dpkg test]
libffi-devel [platform:rpm test]
# these are needed by infra for python-* jobs
libpq-dev [platform:dpkg test]
libpq-devel [platform:rpm test]
postgresql
postgresql-client [platform:dpkg]
# postgresql-devel [platform:rpm]
postgresql-server [platform:rpm]
mariadb [platform:rpm]
mariadb-server [platform:rpm platform:debian-bookworm]
# mariadb-devel [platform:rpm]
@ -72,14 +78,6 @@ graphviz [!platform:gentoo test doc]
# libsrvg2 is needed for sphinxcontrib-svg2pdfconverter in docs builds.
librsvg2-tools [doc platform:rpm]
librsvg2-bin [doc platform:dpkg]
latexmk [doc]
texlive-collection-fontsrecommended [doc platform:rpm]
tex-gyre [doc platform:dpkg]
texlive-latex-extra [doc platform:dpkg]
texlive-collection-latexextra [doc platform:rpm]
texlive-fonts-extra-links [doc platform:dpkg]
texlive-collection-fontsextra [doc platform:rpm]
# these are needed to build images
@ -92,9 +90,7 @@ libguestfs0 [platform:dpkg imagebuild]
libguestfs [platform:rpm imagebuild devstack]
libguestfs-tools [platform:dpkg devstack]
python3-guestfs [platform:dpkg imagebuild]
qemu-img [platform:redhat devstack]
qemu-tools [platform:suse devstack]
qemu-utils [platform:dpkg devstack]
qemu-img [platform:rpm devstack]
# for TinyIPA build
wget [imagebuild]
python3-pip [imagebuild]
@ -102,10 +98,3 @@ unzip [imagebuild]
sudo [imagebuild]
gawk [imagebuild]
mtools [imagebuild]
# For automatic artifact decompression
zstd [devstack]
# For graphical console support
podman [devstack]
systemd-container [devstack]
buildah [devstack]


@ -8,15 +8,13 @@ fi
# values are: "bios" or "uefi", defaults to "uefi".
IRONIC_BOOT_MODE=${IRONIC_BOOT_MODE:-uefi}
IRONIC_HW_ARCH=${IRONIC_HW_ARCH:-x86_64}
CIRROS_VERSION_DEVSTACK=$(set +o xtrace &&
source $TOP_DIR/stackrc &&
echo $CIRROS_VERSION)
CIRROS_VERSION=${CIRROS_VERSION:-$CIRROS_VERSION_DEVSTACK}
IRONIC_DEFAULT_IMAGE_NAME=cirros-${CIRROS_VERSION}-${IRONIC_HW_ARCH}-uec
IRONIC_DEFAULT_IMAGE_NAME=cirros-${CIRROS_VERSION}-x86_64-uec
IRONIC_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-$IRONIC_DEFAULT_IMAGE_NAME}
@ -37,8 +35,8 @@ function add_image_link {
# Do not restrict downloading image only for specific case. Download both disk and uec images.
# NOTE (vdrok): Here the images are actually pre-cached by devstack, in
# the files folder, so they won't be downloaded again.
add_image_link http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${IRONIC_HW_ARCH}-uec.tar.gz
add_image_link http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${IRONIC_HW_ARCH}-disk.img
add_image_link http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-uec.tar.gz
add_image_link http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-disk.img
export IRONIC_WHOLEDISK_IMAGE_NAME=${IRONIC_WHOLEDISK_IMAGE_NAME:-${IRONIC_IMAGE_NAME/-uec/-disk}}
export IRONIC_PARTITIONED_IMAGE_NAME=${IRONIC_PARTITIONED_IMAGE_NAME:-${IRONIC_IMAGE_NAME/-disk/-uec}}


@ -1,7 +1,7 @@
# NOTE(TheJulia): This is a special bindep file which is independent of the
# project bindep file which is for general usage. This bindep file is
# intended for execution from Devstack.
# The *primary* purpose being, devstack manages sql dependency management
# intended for execution from Devstack.
# The *primary* purpose being, devstack manages sql dependency mangement
# and credential setup, so they can't be included here or it is installed
# prematurely.
@ -10,7 +10,7 @@ ipmitool [default]
ipxe [platform:dpkg default]
ipxe-bootimgs [platform:rpm default]
socat [default]
xinetd [default !platform:centos-9 !platform:rhel-9]
xinetd [default]
tftpd-hpa [platform:dpkg]
tftp-server [platform:rpm]
# Starting with Debian Jessie (and thus in Ubuntu Xenial too),
@ -21,9 +21,6 @@ tftp-server [platform:rpm]
pxelinux [platform:dpkg]
syslinux
syslinux-common [platform:dpkg]
# On CentOS Stream pxelinux.0 boot loader is in the syslinux-nonlinux
# package.
syslinux-nonlinux [platform:rpm]
isolinux [platform:dpkg]
socat [default]
# Grub2 files for boot loading using PXE/GRUB2
@ -34,7 +31,7 @@ libvirt-clients [platform:dpkg]
libvirt [platform:rpm]
libvirt-dev [platform:dpkg]
libvirt-devel [platform:rpm]
qemu-system [platform:dpkg]
qemu [platform:dpkg]
qemu-kvm [platform:dpkg platform:rpm]
qemu-utils [platform:dpkg]
qemu-system-data [platform:dpkg]
@ -46,8 +43,6 @@ ipxe-roms-qemu [platform:rpm]
openvswitch [platform:rpm]
iptables [default]
net-tools [platform:rpm]
# web assets for ironic-novncproxy
novnc [default]
# these are needed to compile Python dependencies from sources
python-dev [platform:dpkg test]
@ -58,6 +53,12 @@ libssl-dev [platform:dpkg test]
libffi-dev [platform:dpkg test]
libffi-devel [platform:rpm test]
# these are needed by infra for python-* jobs
libpq-dev [platform:dpkg test]
libpq-devel [platform:rpm test]
postgresql
postgresql-client [platform:dpkg]
# postgresql-devel [platform:rpm]
postgresql-server [platform:rpm]
mariadb [platform:rpm]
mariadb-server [platform:rpm]
# mariadb-devel [platform:rpm]
@ -88,7 +89,6 @@ kpartx
libguestfs0 [platform:dpkg imagebuild]
libguestfs [platform:rpm imagebuild]
libguestfs-tools [platform:dpkg]
guestfs-tools [platform:rpm imagebuild]
python-guestfs [platform:dpkg imagebuild]
qemu-img [platform:rpm]
# for TinyIPA build


@ -1,73 +0,0 @@
- local_loop:
name: image0
- partitioning:
base: image0
label: gpt
partitions:
- name: ESP
type: 'EF00'
size: 350MiB
mkfs:
type: vfat
mount:
mount_point: /boot/efi
fstab:
options: "defaults"
fsck-passno: 2
- name: BSP
type: 'EF02'
size: 8MiB
- name: root
flags: [ boot ]
size: 6G
- lvm:
name: lvm
base: [ root ]
pvs:
- name: pv
base: root
options: [ "--force" ]
vgs:
- name: vg
base: [ "pv" ]
options: [ "--force" ]
lvs:
- name: lv_root
base: vg
extents: 50%VG
- name: lv_var
base: vg
extents: 15%VG
- name: lv_home
base: vg
extents: 10%VG
- mkfs:
name: fs_root
base: lv_root
type: xfs
label: "img-rootfs"
mount:
mount_point: /
fstab:
options: "rw,relatime"
fsck-passno: 1
- mkfs:
name: fs_var
base: lv_var
type: ext4
mount:
mount_point: /var
fstab:
options: "rw,relatime"
fsck-passno: 2
- mkfs:
name: fs_home
base: lv_home
type: ext4
mount:
mount_point: /home
fstab:
options: "rw,nodev,relatime"
fsck-passno: 2

File diff suppressed because it is too large


@ -7,7 +7,7 @@
echo_summary "ironic devstack plugin.sh called: $1/$2"
source $DEST/ironic/devstack/lib/ironic
if is_service_enabled ir-api ir-cond ir-novnc; then
if is_service_enabled ir-api ir-cond; then
if [[ "$1" == "stack" ]]; then
if [[ "$2" == "install" ]]; then
# stack/install - Called after the layer 1 and 2 projects source and
@ -37,17 +37,25 @@ if is_service_enabled ir-api ir-cond ir-novnc; then
if [[ "$IRONIC_BAREMETAL_BASIC_OPS" == "True" && "$IRONIC_IS_HARDWARE" == "False" ]]; then
echo_summary "Precreating bridge: $IRONIC_VM_NETWORK_BRIDGE"
if [[ "$Q_BUILD_OVS_FROM_GIT" != "True" ]]; then
if [[ "$Q_BUILD_OVS_FROM_GIT" == "True" ]]; then
if [[ "$Q_AGENT" == "ovn" ]]; then
# If we're here, we were requested to install from git
# for OVN *and* OVS, but that means basic setup has not been
# performed yet. As such, we need to do that and start
# OVN/OVS where as if we just need to ensure OVS is present,
# vendor packaging does that for us. We start early here,
# because neutron setup for this is deferred until too late
# for our plugin to setup the test environment.
echo_summary "Setting up OVN..."
init_ovn
start_ovn
fi
else
# NOTE(TheJulia): We are likely doing this to ensure
# OVS is running.
echo_summary "Installing OVS to pre-create bridge"
install_package openvswitch-switch
fi
if [[ "$Q_AGENT" == "ovn" ]]; then
echo_summary "Setting up OVN..."
init_ovn
start_ovn
fi
sudo ovs-vsctl -- --may-exist add-br $IRONIC_VM_NETWORK_BRIDGE
fi


@ -1,4 +1,4 @@
enable_service ironic ir-api ir-cond ir-novnc ir-sw-sim
enable_service ironic ir-api ir-cond
source $DEST/ironic/devstack/common_settings
@ -27,9 +27,7 @@ fi
# The overhead is essentially another 78 bytes. In order to
# handle both cases, lets go ahead and drop the maximum by
# 78 bytes, while not going below 1280 to make IPv6 work at all.
if [[ "$HOST_TOPOLOGY" == "multinode" ]]; then
# This logic is to artificially pin down the PUBLIC_BRIDGE_MTU for
# when we are using mutlinode architecture, as to transfer the
# bytes over the multinode VXLAN tunnel, we need to drop the mtu.
PUBLIC_BRIDGE_MTU=${OVERRIDE_PUBLIC_BRIDGE_MTU:-$((local_mtu - 78))}
PUBLIC_BRIDGE_MTU=${OVERRIDE_PUBLIC_BRIDGE_MTU:-$((local_mtu - 78))}
if [ $PUBLIC_BRIDGE_MTU -lt 1280 ]; then
PUBLIC_BRIDGE_MTU=1280
fi


@ -9,7 +9,7 @@ if [[ "$VERBOSE" == True ]]; then
fi
CIRROS_VERSION=${CIRROS_VERSION:-0.6.1}
CIRROS_ARCH=${IRONIC_HW_ARCH:-x86_64}
CIRROS_ARCH=${CIRROS_ARCH:-x86_64}
# TODO(dtantsur): use the image cached on infra images in the CI
DISK_URL=http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img
OUT=$(realpath ${1:-rootfs.img})
@ -54,11 +54,6 @@ sudo mount $efidev $efi_mp
sudo cp -aR $root_mp/* $dest/
sudo cp -aR $efi_mp/EFI $dest/boot/efi/
# Extract all of the stuffs from the disk image and write it out into
# the dest folder. This is *normally* done on startup for Cirros, but
# doesn't quite jive with the expected partition image model.
sudo zcat $root_mp/boot/initrd.img* | sudo cpio -i --make-directories -D $dest
# These locations are required by IPA even when it does not really run
# grub-install.
sudo mkdir -p $dest/{dev,proc,run,sys}


@ -49,7 +49,7 @@ CONSOLE_PTY = """
<target port='0'/>
</serial>
<console type='pty'>
<target port='0'/>
<target type='serial' port='0'/>
</console>
"""
@ -65,8 +65,6 @@ def main():
help='The virtualization engine to use')
parser.add_argument('--arch', default='i686',
help='The architecture to use')
parser.add_argument('--machine_type', default='q35',
help='Machine type based on architecture')
parser.add_argument('--memory', default='2097152',
help="Maximum memory for the VM in KB.")
parser.add_argument('--cpus', default='1',
@ -79,10 +77,6 @@ def main():
help='The number of interfaces to add to VM.'),
parser.add_argument('--mac', default=None,
help='The mac for the first interface on the vm')
parser.add_argument('--mtu', default=None,
help='The mtu for the interfaces on the vm')
parser.add_argument('--net_simulator', default='ovs',
help='Network simulator is in use.')
parser.add_argument('--console-log',
help='File to log console')
parser.add_argument('--emulator', default=None,
@ -95,8 +89,6 @@ def main():
help=('The absolute path of the non-volatile memory '
'to store the UEFI variables. Should be used '
'only when --uefi-loader is also specified.'))
parser.add_argument('--block-size', default='512',
help='The block size for the block storage.')
args = parser.parse_args()
env = jinja2.Environment(loader=jinja2.FileSystemLoader(templatedir))
@ -112,20 +104,16 @@ def main():
'images': images,
'engine': args.engine,
'arch': args.arch,
'machine_type': args.machine_type,
'memory': args.memory,
'cpus': args.cpus,
'bootdev': args.bootdev,
'interface_count': args.interface_count,
'mac': args.mac,
'mtu': args.mtu,
'net_simulator': args.net_simulator,
'nicdriver': args.libvirt_nic_driver,
'emulator': args.emulator,
'disk_format': args.disk_format,
'uefi_loader': args.uefi_loader,
'uefi_nvram': args.uefi_nvram,
'block_size': args.block_size,
}
if args.emulator:
@ -145,7 +133,6 @@ def main():
params['console'] = CONSOLE_LOG % {'console_log': args.console_log}
else:
params['console'] = CONSOLE_PTY
libvirt_template = template.render(**params)
conn = libvirt.open("qemu:///system")


@ -12,7 +12,7 @@ export PS4='+ ${BASH_SOURCE:-}:${FUNCNAME[0]:-}:L${LINENO:-}: '
# Keep track of the DevStack directory
TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
while getopts "n:c:i:m:M:d:a:b:e:E:p:o:f:l:L:N:A:D:v:P:t:B:s:" arg; do
while getopts "n:c:i:m:M:d:a:b:e:E:p:o:f:l:L:N:A:D:v:P:" arg; do
case $arg in
n) NAME=$OPTARG;;
c) CPU=$OPTARG;;
@ -36,9 +36,6 @@ while getopts "n:c:i:m:M:d:a:b:e:E:p:o:f:l:L:N:A:D:v:P:t:B:s:" arg; do
D) NIC_DRIVER=$OPTARG;;
v) VOLUME_COUNT=$OPTARG;;
P) STORAGE_POOL=$OPTARG;;
t) MACHINE_TYPE=$OPTARG;;
B) BLOCK_SIZE=$OPTARG;;
s) NET_SIMULATOR=$OPTARG;;
esac
done
@ -79,8 +76,6 @@ if [ ! -z "$UEFI_LOADER" ]; then
fi
fi
BLOCK_SIZE=${BLOCK_SIZE:-512}
# Create bridge and add VM interface to it.
# Additional interface will be added to this bridge and
# it will be plugged to OVS.
@ -88,29 +83,18 @@ BLOCK_SIZE=${BLOCK_SIZE:-512}
# when VM is in shutdown state
INTERFACE_COUNT=${INTERFACE_COUNT:-1}
if [[ "${NET_SIMULATOR:-ovs}" == "ovs" ]]; then
for int in $(seq 1 $INTERFACE_COUNT); do
ovsif=ovs-${NAME}i${int}
sudo ovs-vsctl --no-wait add-port $BRIDGE $ovsif
for int in $(seq 1 $INTERFACE_COUNT); do
tapif=tap-${NAME}i${int}
ovsif=ovs-${NAME}i${int}
# NOTE(vsaienko) use veth pair here to ensure that interface
# exists in OVS even when VM is powered off.
sudo ip link add dev $tapif type veth peer name $ovsif
for l in $tapif $ovsif; do
sudo ip link set dev $l up
sudo ip link set $l mtu $INTERFACE_MTU
done
else
for int in $(seq 1 $INTERFACE_COUNT); do
# NOTE(TheJulia): A simulator's setup will need to come along
# and identify all of the simulators for required configuration.
# NOTE(TheJulia): It would be way easier if we just sequentally
# numbered *all* interfaces together, but the per-vm execution
# model of this script makes it... difficult.
simif=sim-${NAME}i${int}
tapif=tap-${NAME}i${int}
# NOTE(vsaienko) use veth pair here to ensure that interface
# exists when VMs are turned off.
sudo ip link add dev $tapif type veth peer name $simif || true
for l in $tapif $simif; do
sudo ip link set dev $l up
sudo ip link set $l mtu $INTERFACE_MTU
done
done
fi
sudo ovs-vsctl add-port $BRIDGE $ovsif
done
if [ -n "$MAC_ADDRESS" ] ; then
MAC_ADDRESS="--mac $MAC_ADDRESS"
@ -139,22 +123,13 @@ if ! virsh list --all | grep -q $NAME; then
if [[ -n "$EMULATOR" ]]; then
vm_opts+="--emulator $EMULATOR "
fi
$PYTHON $TOP_DIR/scripts/configure-vm.py \
--bootdev network --name $NAME \
--arch $ARCH --cpus $CPU --memory $MEM --libvirt-nic-driver $LIBVIRT_NIC_DRIVER \
--disk-format $DISK_FORMAT $VM_LOGGING --engine $ENGINE $UEFI_OPTS $vm_opts \
--interface-count $INTERFACE_COUNT $MAC_ADDRESS --machine_type $MACHINE_TYPE \
--block-size $BLOCK_SIZE --mtu ${INTERFACE_MTU} --net_simulator ${NET_SIMULATOR:-ovs} >&2
--interface-count $INTERFACE_COUNT $MAC_ADDRESS >&2
fi
# echo mac in format mac1,ovs-node-0i1;mac2,ovs-node-0i2;...;macN,ovs-node0iN
# NOTE(TheJulia): Based upon the interface format, we need to search for slightly
# different output from the script run because we have to use different attachment
# names.
if [[ "${NET_SIMULATOR:-ovs}" == "ovs" ]]; then
VM_MAC=$(echo -n $(virsh domiflist $NAME |awk '/ovs-/{print $5","$1}')|tr ' ' ';')
else
VM_MAC=$(echo -n $(virsh domiflist $NAME |awk '/tap-/{print $5","$3}')|tr ' ' ';')
fi
VM_MAC=$(echo -n $(virsh domiflist $NAME |awk '/tap-/{print $5","$3}')|tr ' ' ';' |sed s/tap-/ovs-/g)
echo -n "$VM_MAC $VBMC_PORT $PDU_OUTLET"


@ -1,14 +0,0 @@
[Unit]
Description=TFTP server for Ironic
[Service]
ExecStart=
ExecStart=/usr/sbin/in.tftpd -v -v -v -v -v --blocksize %MAX_BLOCKSIZE% --map-file %TFTPBOOT_DIR%/map-file %TFTPBOOT_DIR%
StandardInput=socket
StandardOutput=journal
StandardError=journal
User=root
Group=root
%IPV6_FLAG%


@ -3,7 +3,7 @@
<memory unit='KiB'>{{ memory }}</memory>
<vcpu>{{ cpus }}</vcpu>
<os>
<type arch='{{ arch }}' machine='{{ machine_type }}'>hvm</type>
<type arch='{{ arch }}' machine='q35'>hvm</type>
{% if bootdev == 'network' and not uefi_loader %}
<boot dev='{{ bootdev }}'/>
{% endif %}
@ -14,19 +14,10 @@
{% endif %}
{% endif %}
<bootmenu enable='no'/>
{% if arch != 'aarch64' %}
<bios useserial='yes'/>
{% endif %}
</os>
{% if engine == 'kvm' or arch == 'aarch64' %}
{% if engine == 'kvm' %}
<cpu mode='host-passthrough'/>
{% endif %}
{% if arch == 'aarch64' %}
<cpu mode='custom' match='exact' check='none'>
<model fallback='allow'>cortex-a53</model>
</cpu>
{% endif %}
{% if engine == 'kvm' %}
<cpu mode='host-passthrough'/>
{% else %}
<cpu mode='host-model'/>
{% endif %}
@ -46,24 +37,15 @@
<driver name='qemu' type='{{ disk_format }}' cache='unsafe'/>
<source file='{{ imagefile }}'/>
<target dev='vd{{ letter }}'/>
<blockio logical_block_size="{{ block_size }}" physical_block_size="{{ block_size }}" discard_granularity="{{ block_size }}"/>
</disk>
{% endfor %}
{% for n in range(1, interface_count+1) %}
{% if net_simulator == 'ovs' %}
<interface type='ethernet'>
{% else %}
<interface type='direct'>
{% endif %}
{% if n == 1 and mac %}
<mac address='{{ mac }}'/>
{% endif %}
{% if net_simulator == 'ovs' %}
<target dev='{{ "ovs-" + name + "i" + n|string }}'/>
{% else %}
<source dev='{{ "tap-" + name + "i" + n|string }}'/>
{% endif %}
<model type='{{ nicdriver }}' />
<model type='{{ nicdriver }}'/>
{% if uefi_loader and bootdev == 'network' %}
<boot order='{{ n|string }}'/>
{% endif %}


@ -139,7 +139,7 @@ function destroy {
openstack router unset --external-gateway neutron_grenade || /bin/true
openstack router remove subnet neutron_grenade neutron_grenade || /bin/true
openstack router delete neutron_grenade || /bin/true
openstack network delete neutron_grenade || /bin/true
openstack network neutron_grenade || /bin/true
}
# Dispatcher


@ -66,19 +66,6 @@ if [[ -d $IRONIC_CONF_DIR ]] && [[ ! -d $SAVE_DIR/etc.ironic ]] ; then
cp -pr $IRONIC_CONF_DIR $SAVE_DIR/etc.ironic
fi
# Ironic has an early consumer of a new neutron API, and grenade nor devstack
# has any concept of restarting neutron-rpc-server as it was added in late
# 2024. Ultimately networking-baremetal adding an rpc call which needs the
# updated service running means we need to restart it, for now.
sudo systemctl stop devstack@neutron-rpc-server.service || true
sudo systemctl stop devstack@q-l3.service || true
sudo systemctl stop devstack@q-agt.service || true
sleep 1
sudo systemctl start devstack@neutron-rpc-server.service || true
sudo systemctl start devstack@q-l3.service || true
sudo systemctl start devstack@q-agt.service || true
sleep 1
stack_install_service ironic
# calls upgrade-ironic for specific release
@ -109,7 +96,7 @@ $IRONIC_BIN_DIR/ironic-dbsync --config-file=$IRONIC_CONF_FILE
if [[ "${HOST_TOPOLOGY}" == "multinode" ]]; then
iniset $IRONIC_CONF_FILE DEFAULT pin_release_version ${BASE_DEVSTACK_BRANCH#*/}
else
$IRONIC_BIN_DIR/ironic-dbsync online_data_migrations
ironic-dbsync online_data_migrations
fi
ensure_started='ironic-conductor nova-compute '


@ -1,6 +1,6 @@
openstackdocstheme>=3.5.0 # Apache-2.0
openstackdocstheme>=2.2.0 # Apache-2.0
os-api-ref>=1.4.0 # Apache-2.0
reno>=3.1.0 # Apache-2.0
sphinx>=2.0.0 # BSD
sphinx>=2.0.0,!=2.1.0 # BSD
sphinxcontrib-apidoc>=0.2.0 # BSD
sphinxcontrib-svg2pdfconverter>=0.1.0 # BSD


@ -12,7 +12,11 @@
# License for the specific language governing permissions and limitations
# under the License.
from collections import defaultdict
import inspect
import itertools
import operator
import os.path
from docutils import nodes
from docutils.parsers import rst
@ -54,22 +58,19 @@ def _list_table(add, headers, data, title='', columns=None):
else:
# potentially multi-line string
add(' * %s' % lines[0])
for line in lines[1:]:
add(' %s' % line)
for l in lines[1:]:
add(' %s' % l)
add('')
def _format_doc(doc):
"Format one method docstring to be shown in the step table."
paras = doc.split('\n\n')
formatted_docstring = []
for line in paras:
if line.startswith(':'):
continue
if paras[-1].startswith(':'):
# Remove the field table that commonly appears at the end of a
# docstring.
formatted_docstring.append(line)
return '\n\n'.join(formatted_docstring)
paras = paras[:-1]
return '\n\n'.join(paras)
_clean_steps = {}
@ -87,8 +88,8 @@ def _init_steps_by_driver():
for interface_name in sorted(driver_factory.driver_base.ALL_INTERFACES):
if DEBUG:
LOG.info('[%s] probing available plugins for interface %s',
__name__, interface_name)
LOG.info('[{}] probing available plugins for interface {}'.format(
__name__, interface_name))
loader = stevedore.ExtensionManager(
'ironic.hardware.interfaces.{}'.format(interface_name),
@ -113,8 +114,8 @@ def _init_steps_by_driver():
'doc': _format_doc(inspect.getdoc(method)),
}
if DEBUG:
LOG.info('[%s] interface %r driver %r STEP %r',
__name__, interface_name, plugin.name, step)
LOG.info('[{}] interface {!r} driver {!r} STEP {}'.format(
__name__, interface_name, plugin.name, step))
steps.append(step)
if steps:
@ -152,8 +153,7 @@ class AutomatedStepsDirective(rst.Directive):
result = ViewList()
for interface_name in ['power', 'management', 'firmware',
'deploy', 'bios', 'raid']:
for interface_name in ['power', 'management', 'deploy', 'bios', 'raid']:
interface_info = _clean_steps.get(interface_name, {})
if not interface_info:
continue
@ -167,8 +167,7 @@ class AutomatedStepsDirective(rst.Directive):
_list_table(
title='{} cleaning steps'.format(driver_name),
add=lambda x: result.append(x, source_name),
headers=['Name', 'Details', 'Priority',
'Stoppable', 'Arguments'],
headers=['Name', 'Details', 'Priority', 'Stoppable', 'Arguments'],
columns=[20, 30, 10, 10, 30],
data=(
('``{}``'.format(s['step']),


@ -1,187 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import os
from sphinx.application import Sphinx
__version__ = "1.0.0"
# Data model #
class Entity:
"""Represents an entity in the profile."""
def __init__(self, name, src):
self.name = name
self.src = src
self.purpose = src.get('Purpose', '')
self.writable = src.get('WriteRequirement') == 'Mandatory'
self.required = (src.get('ReadRequirement') in ('Mandatory', None)
or self.writable)
class ActionParameter(Entity):
"""Represents a parameter in an Action."""
def __init__(self, name, src):
super().__init__(name, src)
self.required_values = src.get('ParameterValues') or []
self.recommended_values = src.get('RecommendedValues') or []
class Action(Entity):
"""Represents an action on a resource."""
def __init__(self, name, src):
super().__init__(name, src)
self.parameters = {
name: ActionParameter(name, value)
for name, value in src.get('Parameters', {}).items()
}
class Resource(Entity):
"""Represents any resource in the profile.
Both top-level resources and nested fields are represented by this class
(but actions are not).
"""
def __init__(self, name, src):
super().__init__(name, src)
self.min_support_values = src.get('MinSupportValues')
self.properties = {
name: Resource(name, value)
for name, value in src.get('PropertyRequirements', {}).items()
}
self.actions = {
name: Action(name, value)
for name, value in src.get('ActionRequirements', {}).items()
}
self.link_to = (src['Values'][0]
if src.get('Comparison') == 'LinkToResource'
else None)
# Rendering #
LEVELS = {0: '=', 1: '-', 2: '~', 3: '^'}
INDENT = ' ' * 4
class NestedWriter:
"""A writer that is nested with indentations."""
def __init__(self, dest, level=0):
self.dest = dest
self.level = level
def text(self, text):
print(INDENT * self.level + text, file=self.dest)
def para(self, text):
self.text(text)
print(file=self.dest)
def _nested_common(self, res):
required = " **[required]**" if res.required else ""
writable = " **[writable]**" if res.writable else ""
self.text(f"``{res.name}``{required}{writable}")
nested = NestedWriter(self.dest, self.level + 1)
if res.purpose:
nested.para(res.purpose)
return nested
def action(self, res):
nested = self._nested_common(res)
for prop in res.parameters.values():
nested.action_parameter(prop)
print(file=self.dest)
def action_parameter(self, res):
self._nested_common(res)
print(file=self.dest)
def resource(self, res):
nested = self._nested_common(res)
for prop in res.properties.values():
nested.resource(prop)
if res.link_to:
# NOTE(dtantsur): this is a bit hacky, but we don't have
# definitions for all possible collections.
split = res.link_to.split('Collection')
if len(split) > 1:
nested.text("Link to a collection of "
f":ref:`Redfish-{split[0]}` resources.")
else:
nested.text(f"Link to a :ref:`Redfish-{res.link_to}` "
"resource.")
print(file=self.dest)
class Writer(NestedWriter):
def __init__(self, dest):
super().__init__(dest)
def title(self, text, level=1):
print(text, file=self.dest)
print(LEVELS[level] * len(text), file=self.dest)
def top_level(self, res):
required = " **[required]**" if res.required else ""
self.para(f".. _Redfish-{res.name}:")
self.title(f"{res.name}")
self.para(f"{res.purpose}{required}")
if res.properties:
self.title("Properties", level=2)
for name, prop in res.properties.items():
self.resource(prop)
if res.actions:
self.title("Actions", level=2)
for name, act in res.actions.items():
self.action(act)
def builder_inited(app: Sphinx):
source = os.path.join(app.srcdir, app.config.redfish_interop_source)
with open(source) as fp:
profile = json.load(fp)
fname = os.path.basename(source).replace('json', 'rst')
dstdir = os.path.join(app.srcdir, app.config.redfish_interop_output_dir)
with open(os.path.join(dstdir, fname), 'wt') as dest:
w = Writer(dest)
w.title(f"{profile['ProfileName']} {profile['ProfileVersion']}", 0)
w.para(profile['Purpose'])
try:
for name, value in sorted(
(name, value)
for name, value in profile['Resources'].items()
):
w.top_level(Resource(name, value))
except Exception:
import traceback
traceback.print_exc()
raise
def setup(app: Sphinx):
app.connect('builder-inited', builder_inited)
app.add_config_value('redfish_interop_source', None, 'env', [str])
app.add_config_value('redfish_interop_output_dir', None, 'env', [str])
return {'version': __version__}


@ -78,7 +78,7 @@ def parse_field_list(content):
def create_bullet_list(input_dict, input_build_env):
"""Convert input_dict into a sphinx representation of a bullet list."""
"""Convert input_dict into a sphinx representaion of a bullet list."""
grp_field = GroupedField('grp_field', label='title')
bullet_list = nodes.paragraph()
@ -138,7 +138,7 @@ def split_list(input_list):
"""Split input_list into three sub-lists.
This function splits the input_list into three, one list containing the
initial non-empty items, one list containing items appearing after the
inital non-empty items, one list containing items appearing after the
string 'Success' in input_list; and the other list containing items
appearing after the string 'Failure' in input_list.
"""
@ -272,8 +272,7 @@ class Parameters(Directive):
for field_name in input_dict:
old_field_body = input_dict[field_name]
if old_field_body in yaml_data.keys():
input_dict[field_name] = \
yaml_data[old_field_body]["description"]
input_dict[field_name] = yaml_data[old_field_body]["description"]
# Convert dictionary to bullet list format
params_build_env = self.state.document.settings.env
@ -328,8 +327,7 @@ class Return(Directive):
failure_detail = create_bullet_list(failure_dict, ret_build_env)
ret_table_contents += failure_detail
if (len(initial_list) > 0 or len(success_list) > 0 or
len(proc_fail_list) > 0):
if len(initial_list) > 0 or len(success_list) > 0 or len(proc_fail_list) > 0:
# Create a table to display the final Returns directive output
ret_table = create_table('Returns', ret_table_contents)
return [ret_table]


@ -18,7 +18,7 @@ states, which will prevent the node from being seen by the Compute
service as ready for use.
This feature is leveraged as part of the state machine workflow,
where a node in ``manageable`` can be moved to an ``active`` state
where a node in ``manageable`` can be moved to ``active`` state
via the provision_state verb ``adopt``. To view the state
transition capabilities, please see :ref:`states`.
@ -48,7 +48,7 @@ required boot image, or boot ISO image and then places any PXE or virtual
media configuration necessary for the node should it be required.
The adoption process makes no changes to the physical node, with the
exception of operator-supplied configurations where virtual media is
exception of operator supplied configurations where virtual media is
used to boot the node under normal circumstances. An operator should
ensure that any supplied configuration defining the node is sufficient
for the continued operation of the node moving forward.
@ -56,7 +56,7 @@ for the continued operation of the node moving forward.
Possible Risk
=============
The main risk with this feature is that the supplied configuration may ultimately
The main risk with this feature is that supplied configuration may ultimately
be incorrect or invalid which could result in potential operational issues:
* ``rebuild`` verb - Rebuild is intended to allow a user to re-deploy the node
@ -143,7 +143,7 @@ from the ``manageable`` state to ``active`` state::
.. NOTE::
In the above example, the image_source setting must reference a valid
image or file, however, that image or file can ultimately be empty.
image or file, however that image or file can ultimately be empty.
.. NOTE::
The above example utilizes a capability that defines the boot operation
@ -154,7 +154,7 @@ from the ``manageable`` state to ``active`` state::
The above example will fail a re-deployment as a fake image is
defined and no instance_info/image_checksum value is defined.
As such any actual attempt to write the image out will fail as the
image_checksum value is only validated at the time of an actual
image_checksum value is only validated at time of an actual
deployment operation.
.. NOTE::
@ -165,9 +165,10 @@ from the ``manageable`` state to ``active`` state::
baremetal node set <node name or uuid> --instance-uuid <uuid>
.. NOTE::
A user of this feature may wish to add new nodes with a
``network_interface`` value of ``noop`` and then change the interface
at a later point and time.
In Newton, coupled with API version 1.20, the concept of a
network_interface was introduced. A user of this feature may wish to
add new nodes with a network_interface of ``noop`` and then change
the interface at a later point and time.
Troubleshooting
===============
@ -175,7 +176,7 @@ Troubleshooting
Should an adoption operation fail for a node, the error that caused the
failure will be logged in the node's ``last_error`` field when viewing the
node. This error, in the case of node adoption, will largely be due to
the failure of a validation step. Validation steps are dependent
failure of a validation step. Validation steps are dependent
upon what driver is selected for the node.
Any node that is in the ``adopt failed`` state can have the ``adopt`` verb
@ -183,7 +184,7 @@ re-attempted. Example::
baremetal node adopt <node name or uuid>
If a user wishes to cancel their attempt at adopting, they can then move
If a user wishes to abort their attempt at adopting, they can then move
the node back to ``manageable`` from ``adopt failed`` state by issuing the
``manage`` verb. Example::
@ -204,18 +205,18 @@ Adoption with Nova
Since there is no mechanism to create bare metal instances in Nova when nodes
are adopted into Ironic, the node adoption feature described above cannot be
used to add in production nodes to deployments that use Ironic together with
used to add in production nodes to deployments which use Ironic together with
Nova.
One option to add production nodes to an Ironic/Nova deployment is to use
One option to add in production nodes to an Ironic/Nova deployment is to use
the fake drivers. The overall idea is that for Nova the nodes are instantiated
normally to ensure the instances are properly created in the compute project
while Ironic does not touch them.
Here are some high-level steps to be used as a guideline:
Here are some high level steps to be used as a guideline:
* create a bare metal flavor and a hosting project for the instances
* enroll the nodes into Ironic, create the ports, and move them to manageable
* enroll the nodes into Ironic, create the ports, move them to manageable
* change the hardware type and the interfaces to fake drivers
* provide the nodes to make them available
* one by one, add the nodes to the placement aggregate and create instances
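A hedged shell sketch of the guideline above; node and flavor names are placeholders, and ``fake-hardware`` is assumed to be the enabled fake hardware type:

.. code-block:: shell

   # Enroll and clean up to "available" as usual, then switch to fake drivers
   # so Ironic no longer touches the hardware.
   openstack baremetal node manage <node>
   openstack baremetal node set <node> --driver fake-hardware
   openstack baremetal node provide <node>

   # Create the instance normally so Nova bookkeeping stays consistent.
   openstack server create --flavor <baremetal-flavor> --image <image> <name>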


@ -17,22 +17,22 @@ How it works
The expected workflow is as follows:
#. The node is discovered by manually powering it on and gets the
``manual-management`` hardware type and ``agent`` power interface.
`manual-management` hardware type and `agent` power interface.
If discovery is not used, a node can be enrolled through the API and then
powered on manually.
#. The operator moves the node to ``manageable``. It works because the ``agent``
#. The operator moves the node to `manageable`. It works because the `agent`
power only requires to be able to connect to the agent.
#. The operator moves the node to ``available``. Cleaning happens normally via
the already running agent. If a reboot is needed, it is done by telling the
#. The operator moves the node to `available`. Cleaning happens normally via
the already running agent. If reboot is needed, it is done by telling the
agent to reboot the node in-band.
#. A user deploys the node. Deployment happens normally via the already
running agent.
#. At the end of the deployment, the node is rebooted via the reboot command
#. In the end of the deployment, the node is rebooted via the reboot command
instead of power off+on.
Enabling
@ -59,6 +59,10 @@ As usual with the ``noop`` management, enable the networking boot fallback:
[pxe]
enable_netboot_fallback = true
If using discovery, :ironic-inspector-doc:`configure discovery in
ironic-inspector <user/usage.html#discovery>` with the default driver set
to ``manual-management``.
Limitations
===========
@ -66,7 +70,7 @@ Limitations
* Undeploy and rescue are not supported, you need to add BMC credentials first.
* If any errors happen in the process, recovery will likely require BMC
* If any errors happens in the process, recovery will likely require BMC
credentials.
* Only rebooting is possible through the API, power on/off commands will fail.


@ -25,29 +25,29 @@ How it works
These tokens are provided in one of two ways to the running agent.
1. A pre-generated token that is embedded into virtual media ISOs.
2. A one-time generated token that is provided upon the first "lookup"
1. A pre-generated token which is embedded into virtual media ISOs.
2. A one-time generated token that are provided upon the first "lookup"
of the node.
In both cases, the tokens are randomly generated using the Python
In both cases, the tokens are a randomly generated using the Python
``secrets`` library. As of mid-2020, the default length is 43 characters.
Once the token has been provided, the token cannot be retrieved or accessed.
It remains available to the conductors and is stored in the memory of the
It remains available to the conductors, and is stored in memory of the
``ironic-python-agent``.
.. note::
In the case of the token being embedded with virtual media, it is read
from a configuration file within the image. Ideally, this should be paired
from a configuration file with-in the image. Ideally this should be paired
with Swift temporary URLs.
With the token is available in memory in the agent, the token is embedded with
``heartbeat`` operations to the ironic API endpoint. This enables the API to
authenticate the heartbeat request, and refuse "heartbeat" requests from the
``ironic-python-agent``. As of the Victoria release, the use of Agent Token is
``ironic-python-agent``. As of the Victoria release, use of Agent Token is
required for all agents and the previously available setting to force this
functionality to be mandatory, ``[DEFAULT]require_agent_token`` has been removed
and no longer has any effect.
functionality to be mandatory, ``[DEFAULT]require_agent_token`` no longer has
any effect.
.. warning::
If the Bare Metal Service is updated, and the version of
@ -73,10 +73,10 @@ With PXE/iPXE/etc.
Agent Configuration
===================
An additional setting that may be leveraged with the ``ironic-python-agent``
An additional setting which may be leveraged with the ``ironic-python-agent``
is a ``agent_token_required`` setting. Under normal circumstances, this
setting can be asserted via the configuration supplied from the Bare Metal
service deployment upon the ``lookup`` action but can be asserted via the
service deployment upon the ``lookup`` action, but can be asserted via the
embedded configuration for the agent in the ramdisk. This setting is also
available via the kernel command line as ``ipa-agent-token-required``.
available via kernel command line as ``ipa-agent-token-required``.
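As a hedged illustration of the kernel command line form mentioned above (a pxelinux-style ``APPEND`` line; URLs and file names are placeholders):

.. code-block:: shell

   APPEND initrd=agent.ramdisk ipa-api-url=http://<ironic-api>:6385 ipa-agent-token-required=1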


@ -2,7 +2,7 @@ Deploying with anaconda deploy interface
========================================
Ironic supports deploying an OS with the `anaconda`_ installer.
This anaconda deploy interface *ONLY* works with ``pxe`` and ``ipxe`` boot interfaces.
This anaconda deploy interface works with ``pxe`` and ``ipxe`` boot interfaces.
Configuration
-------------
@ -22,13 +22,13 @@ This change takes effect after all the ironic conductors have been
restarted.
The default kickstart template is specified via the configuration option
:oslo.config:option:`anaconda.default_ks_template`. It is set to this `ks.cfg.template`_
``[anaconda]default_ks_template``. It is set to this `ks.cfg.template`_
but can be modified to be some other template.
.. code-block:: ini
[anaconda]
default_ks_template = /etc/ironic/ks.cfg.template
default_ks_template = file:///etc/ironic/ks.cfg.template
When creating an ironic node, specify ``anaconda`` as the deploy interface.
@ -64,7 +64,7 @@ package groups that need to be in the image:
install cloud-init
ts run
An OS tarball can be created using the following set of commands, along with the above
An OS tarball can be created using following set of commands, along with the above
``baremetal.yum`` file:
.. code-block:: shell
@ -102,104 +102,48 @@ The kernel and ramdisk can be found at ``/images/pxeboot/vmlinuz`` and
image can be normally found at ``/LiveOS/squashfs.img`` or
``/images/install.img``.
The anaconda deploy driver uses the following image properties from glance,
which are all optional depending on how you create your bare metal server:
The OS tarball must be configured with the following properties in glance, in
order to be used with the anaconda deploy driver:
* ``kernel_id``
* ``ramdisk_id``
* ``stage2_id``
* ``ks_template``
* ``disk_file_extension``
* ``disk_file_extension`` (optional)
All except ``disk_file_extension`` are glance image IDs. They can be prefixed
with ``glance://``.
Valid ``disk_file_extension`` values are:
* ``.img``
* ``.tar``
* ``.tbz``
* ``.tgz``
* ``.txz``
* ``.tar.gz``
* ``.tar.bz2``
* ``.tar.xz``
When the ``disk_file_extension`` property is not set to one of the above valid
values the anaconda installer will assume that the image provided is a mountable
Valid ``disk_file_extension`` values are ``.img``, ``.tar``, ``.tbz``,
``.tgz``, ``.txz``, ``.tar.gz``, ``.tar.bz2``, and ``.tar.xz``. When
``disk_file_extension`` property is not set to one of the above valid values
the anaconda installer will assume that the image provided is a mountable
OS disk.
An example of creating the necessary glance images with the anaconda files
and the OS tarball and setting properties to refer to components can be seen below.
.. Note:: The various images must be shared except for the OS image
with the properties set. This image must be set to public.
See `bug 2099276 <https://bugs.launchpad.net/ironic/+bug/2099276>`_ for
more details.
This is an example of adding the anaconda-related images and the OS tarball to
glance:
.. code-block:: shell
# vmlinuz
openstack image create --container-format bare --disk-format raw --shared \
--file ./vmlinuz anaconda-kernel-<version>
openstack image create --file ./vmlinuz --container-format aki \
--disk-format aki --shared anaconda-kernel-<version>
openstack image create --file ./initrd.img --container-format ari \
--disk-format ari --shared anaconda-ramdisk-<version>
openstack image create --file ./squashfs.img --container-format ari \
--disk-format ari --shared anaconda-stage-<verison>
openstack image create --file ./os-image.tar.gz \
--container-format bare --disk-format raw --shared \
--property kernel_id=<glance_uuid_vmlinuz> \
--property ramdisk_id=<glance_uuid_ramdisk> \
--property stage2_id=<glance_uuid_stage2> disto-name-version \
--property disk_file_extension=.tgz
# initrd/initramfs/ramdisk
openstack image create --container-format bare --disk-format raw --shared \
--file ./initrd.img anaconda-ramdisk-<version>
Creating a bare metal server
----------------------------
# squashfs/stage2
openstack image create --container-format bare --disk-format raw --shared \
--file ./squashfs.img anaconda-stage2-<version>
KERNEL_ID=$(openstack image show -f value -c id anaconda-kernel-<version>)
RAMDISK_ID=$(openstack image show -f value -c id anaconda-ramdisk-<version>)
STAGE2_ID=$(openstack image show -f value -c id anaconda-stage2-<version>)
# the actual OS image we'll use as our source
openstack image create --container-format bare --disk-format raw --public \
--property kernel_id=${KERNEL_ID} \
--property ramdisk_id=${RAMDISK_ID} \
--property stage2_id=${STAGE2_ID} \
--property disk_file_extension=.tgz \
--file ./os-image.tar.gz \
my-anaconda-based-os-<version>
Deploying a node
----------------
To be able to deploy a node with the anaconda deploy interface, the node's
``instance_info`` must have an ``image_source`` at a minimum; depending
on how your node is being deployed, more fields must be populated.
If you are using Ironic via Nova, then Nova will only set the ``image_source``
on ``instance_info``, so the following image properties are required:
* ``kernel_id``
* ``ramdisk_id``
* ``stage2_id``
You may optionally upload a custom kickstart template to glance and associate
it with the OS image via the ``ks_template`` property.
.. code-block:: shell
openstack server create --image my-anaconda-based-os-<version> ...
If you are not using Ironic via Nova then all properties except
``disk_file_extension`` can be supplied via ``instance_info`` or via the
OS image properties. The values in ``instance_info`` will take precedence
over those specified in the OS image. However, most of their names are
slightly altered.
* ``kernel_id`` OS image property is ``kernel`` in ``instance_info``
* ``ramdisk_id`` OS image property is ``ramdisk`` in ``instance_info``
* ``stage2_id`` OS image property is ``stage2`` in ``instance_info``
Only the ``ks_template`` property remains the same in ``instance_info``.
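As a sketch, a standalone user could supply all of these directly; every
UUID below is a placeholder:

.. code-block:: shell

   baremetal node set <node> \
       --instance_info image_source=glance://<os-image-uuid> \
       --instance_info kernel=glance://<kernel-uuid> \
       --instance_info ramdisk=glance://<ramdisk-uuid> \
       --instance_info stage2=glance://<stage2-uuid>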
.. Note:: If no ``ks_template`` is supplied then
:oslo.config:option:`anaconda.default_ks_template` will be used.
Apart from uploading a custom kickstart template to glance and associating it
with the OS image via the ``ks_template`` property in glance, operators can
also set the kickstart template in the ironic node's ``instance_info`` field.
The kickstart template set in ``instance_info`` takes precedence over the one
specified via the OS image in glance. If no kickstart template is specified
(via the node's ``instance_info`` or ``ks_template`` glance image property),
the default kickstart template will be used to deploy the OS.
This is an example of how to set the kickstart template for a specific
ironic node:
@ -209,32 +153,23 @@ ironic node:
openstack baremetal node set <node> \
--instance_info ks_template=glance://uuid
Ultimately, to deploy your node, it must be able to find the kernel, the
ramdisk, the stage2 file, and your OS image via glance image properties
or via ``instance_info``.
.. code-block:: shell
openstack baremetal node set <node> \
--instance_info image_source=glance://uuid
.. warning::
In the Ironic Project terminology, the word ``template`` often refers to
a file that is supplied to the deployment, to which Ironic supplies
parameters to render a specific output. One critical example of this in
the Ironic workflow, specifically with this driver, is that the generated
``agent token`` is conveyed to the booting ramdisk, enabling it to call
back to Ironic and indicate the state. This token is randomly generated
for every deploy and is required. Specifically, this is leveraged in the
for every deploy, and is required. Specifically this is leveraged in the
template's ``pre``, ``onerror``, and ``post`` steps.
For more information on Agent Token, please see :doc:`/admin/agent-token`.
Standalone deployments
----------------------
While this deployment interface driver was developed around the use of other
OpenStack services, it is not explicitly required. For example, HTTP(S) URLs
can be supplied by the API user to explicitly set the expected baremetal node
``instance_info`` fields:
.. code-block:: shell
@ -247,7 +182,7 @@ can be supplied by the API user to explicitly set the expected baremetal node
When doing so, you may wish to also utilize a customized kickstart template,
which can also be a URL. Please reference the ironic community provided
template *ks.cfg.template* and use it as a basis for your own kickstart
as it accounts for the particular stages and appropriate callbacks to
Ironic.
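As a sketch, the node could reference a template hosted on your own web
server; the URL below is purely illustrative:

.. code-block:: shell

   baremetal node set <node> \
       --instance_info ks_template=http://server.example.com/ks.cfg.template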
@ -288,7 +223,7 @@ At this point, you should be able to request the baremetal node to deploy.
Standalone using a repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Anaconda supports the concept of passing a repository as opposed to a dedicated
Anaconda supports a concept of passing a repository as opposed to a dedicated
URL path which has a ``.treeinfo`` file, which tells the initial boot scripts
where to get various dependencies, such as what would be used as the anaconda
``stage2`` ramdisk. Unfortunately, this functionality is not well documented.
@ -317,11 +252,11 @@ parameter, and the node deployed.
Deployment Process
------------------
At a high level, the mechanics of the anaconda driver work in the following
flow, where we also note the stages and purpose of each part for informational
purposes.
#. Network Boot Program (such as iPXE) downloads the kernel and initial
ramdisk.
#. Kernel launches, uncompresses initial ramdisk, and executes init inside
of the ramdisk.
@ -345,17 +280,17 @@ part due to the general defaults being set to much lower values for image
based deployments, but the way the anaconda deployment interface works,
you may need to make some adjustments.
* :oslo.config:option:`conductor.deploy_callback_timeout` likely needs to be adjusted
for most ``anaconda`` deployment interface users. By default, this
is a timer that looks for "agents" that have not checked in with
* ``[conductor]deploy_callback_timeout`` likely needs to be adjusted
for most ``anaconda`` deployment interface users. By default this
is a timer which looks for "agents" which have not checked in with
Ironic, or agents which may have crashed or failed after they
started. If the value is reached, then the current operation is marked as failed.
This value should be set to a number of seconds which exceeds your
average anaconda deployment time.
* :oslo.config:option:`pxe.boot_retry_timeout` can also be triggered and result in
* ``[pxe]boot_retry_timeout`` can also be triggered and result in
an anaconda deployment in progress getting reset as it is intended
to reboot nodes that might have failed their initial PXE operation.
Depending on the sizes of images, and the exact nature of what was deployed,
to reboot nodes which might have failed their initial PXE operation.
Depending on sizes of images, and the exact nature of what was deployed,
it may be necessary to ensure this is a much higher value, as in the sketch below.
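A sketch of such tuning in ``ironic.conf``; the one-hour values are
illustrative only and should be derived from your own deployment times:

.. code-block:: ini

   [conductor]
   deploy_callback_timeout = 3600

   [pxe]
   boot_retry_timeout = 3600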
Limitations
@ -364,7 +299,7 @@ Limitations
* This deploy interface has only been tested with Red Hat based operating
systems that use anaconda. Other systems are not supported.
* Runtime TLS certificate injection into ramdisks is not supported. Assets
such as ``ramdisk`` or a ``stage2`` ramdisk image need to have trusted
Certificate Authority certificates present within the images *or* the
Ironic API endpoint should utilize a known trusted Certificate
@ -4,21 +4,20 @@
API Audit Logging
=================
Audit middleware supports the delivery of CADF audit events via the Oslo
messaging notifier capability. Based on the ``notification_driver``
configuration, audit events can be routed to the messaging infrastructure
(``notification_driver = messagingv2``) or to a log file
(``[oslo_messaging_notifications]/driver = log``).
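For example, to route audit events to a log file, the notification driver
can be set in ``ironic.conf`` (a minimal sketch)::

    [oslo_messaging_notifications]
    driver = log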
Audit middleware creates two events per REST API interaction. The first event has
information extracted from request data and the second one has request outcome
(response).
Enabling API Audit Logging
==========================
Audit middleware is available as part of the ``keystonemiddleware`` (>= 1.6)
library. For information regarding how the audit middleware functions, see
:keystonemiddleware-doc:`here <audit.html>`.
Auditing can be enabled for the Bare Metal service by making the following changes
@ -31,17 +30,17 @@ to ``/etc/ironic/ironic.conf``.
enabled=true
#. To customize auditing API requests, the audit middleware requires the audit_map_file setting
to be defined. Update the value of the configuration setting 'audit_map_file' to set its
location. Audit map file configuration options for the Bare Metal service are included
in the etc/ironic/ironic_api_audit_map.conf.sample file. To understand CADF format
specified in the ironic_api_audit_map.conf file, refer to `CADF Format.
<http://www.dmtf.org/sites/default/files/standards/documents/DSP2038_1.0.0.pdf>`_::
[audit]
...
audit_map_file=/etc/ironic/api_audit_map.conf
#. Comma-separated list of Ironic REST API HTTP methods to be ignored during audit.
It is used only when API audit is enabled. For example::
[audit]
@ -51,7 +50,7 @@ to ``/etc/ironic/ironic.conf``.
Sample Audit Event
==================
The following is a sample audit event for an ironic node list request.
.. code-block:: json
@ -1,8 +0,0 @@
Architecture and Implementation Details
=======================================
.. toctree::
:maxdepth: 1
Agent Token <agent-token>
Steps <steps>
@ -1,583 +0,0 @@
.. meta::
:description: Implement availability zones with Ironic using conductor groups and shards. Multi-datacenter deployments, fault tolerance, and resource partitioning strategies.
:keywords: availability zones, conductor groups, shards, fault tolerance, multi-datacenter, resource partitioning, high availability, geographic distribution
:author: OpenStack Ironic Team
:robots: index, follow
:audience: cloud architects, system administrators
==========================================
Availability Zones and Resource Isolation
==========================================
Overview
========
While Ironic does not implement traditional OpenStack Availability Zones like
Nova and Neutron, it provides a **three-tier approach** for resource
partitioning and isolation that achieves comprehensive availability
zone functionality:
* **Multiple Ironic Deployments**: Completely separate Ironic services
targeted by different Nova compute nodes
* **Conductor Groups**: Physical/geographical resource partitioning within
a deployment
* **Shards**: Logical grouping for operational scaling within a deployment
This document explains how these mechanisms work together and how to achieve
sophisticated availability zone functionality across your infrastructure.
This document does **not** cover the similar effect which can be achieved
through the use of API-level Role Based Access Control via the
``owner`` and ``lessee`` fields.
.. contents:: Table of Contents
:local:
:depth: 2
Comparison with Other OpenStack Services
========================================
+------------------+-------------------+------------------------+
| Service          | Mechanism         | Purpose                |
+==================+===================+========================+
| Nova             | Availability      | Instance placement     |
|                  | Zones (host       | across fault domains   |
|                  | aggregates)       |                        |
+------------------+-------------------+------------------------+
| Neutron          | Agent AZs         | Network service HA     |
+------------------+-------------------+------------------------+
| **Ironic**       | **Multiple        | **Complete service     |
|                  | Deployments +     | isolation + physical   |
|                  | Conductor Groups  | partitioning +         |
|                  | + Shards**        | operational scaling**  |
+------------------+-------------------+------------------------+
Ironic's Three-Tier Approach
=============================
Tier 1: Multiple Ironic Deployments
------------------------------------
The highest level of isolation involves running **completely separate
Ironic services** that Nova and other API users can target independently.
**Use Cases**:
* Complete geographic separation (different regions/countries)
* Regulatory compliance requiring full data isolation
* Independent upgrade cycles and operational teams
**Implementation**: Configure separate Nova compute services to target
different Ironic deployments using Nova's Ironic driver configuration.
**Benefits**:
* Complete fault isolation - failure of one deployment doesn't affect others
* Independent scaling, upgrades, and maintenance
* Different operational policies per deployment
* Complete API endpoint separation
Tier 2: Conductor Groups (Physical Partitioning)
-------------------------------------------------
Within a single Ironic deployment, conductor groups provide
**physical resource partitioning**.
**Use Cases**:
* Separate nodes by datacenter/availability zone within a region
* Separate nodes by conductor groups for conductor resource management
* Isolate hardware types or vendors
* Create fault domains for high availability
* Manage nodes with different network connectivity
Conductor groups control **which conductor manages which nodes**.
Each conductor can be assigned to a specific group, and will only
manage nodes that belong to the same group.
A classical challenge of Ironic is that it is able to manage far more
Bare Metal nodes than a single ``nova-compute`` service is designed to
support. The primary answer for this issue is to leverage Shards first,
and then continue to evolve based upon operational needs.
See: :doc:`conductor-groups` for detailed configuration.
.. _availability-zones-shards:
Tier 3: Shards (Logical Partitioning)
--------------------------------------
The finest level of granularity for **operational and client-side grouping**.
**Use Cases**:
* Horizontal scaling of operations
* Parallelize maintenance tasks
* Create logical groupings for different teams
Shards can be used by clients, including Nova, to limit the scope of their
requests to a logical and declared subset of nodes, which prevents multiple
``nova-compute`` services from seeing and working with the same node.
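A minimal sketch of shard usage; the shard name ``rack1`` and the
``nova-compute`` configuration below are illustrative assumptions:

.. code-block:: bash

   # Assign a node to a shard (requires API version 1.82 or later)
   baremetal node set --shard rack1 <node-uuid>

   # Limit a nova-compute service to that shard (nova.conf):
   # [ironic]
   # shard = rack1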
.. note::
Shards are client-side constructs - Ironic itself does not use shard
values internally.
.. versionadded:: 1.82
Shard support was added in API version 1.82.
.. warning::
Once set, a shard should not be changed. Nova's model of leveraging the
Ironic API does not permit this value to be changed after the fact.
Common Deployment Patterns
===========================
Pattern 1: Multi-Region with Complete Isolation
------------------------------------------------
**Use Case**: Global deployment with regulatory compliance
**Implementation**:
- **Multiple Deployments**: ``ironic-us-east``, ``ironic-eu-west``, ``ironic-apac``
- **Nova Configuration**: Separate compute services per region
- **Conductor Groups**: Optional within each deployment
- **Shards**: Operational grouping within regions
**Example Nova Configuration**:
.. code-block:: ini
# nova-compute for US East region
[ironic]
auth_url = https://keystone-us-east.example.com/v3
endpoint_override = https://ironic-us-east.example.com
# nova-compute for EU West region
[ironic]
auth_url = https://keystone-eu-west.example.com/v3
endpoint_override = https://ironic-eu-west.example.com
.. note::
The above indicated ``endpoint_override`` configuration is provided
for illustrative purposes to stress that the endpoints would be distinctly
different.
Pattern 2: Single Region with Datacenter Separation
----------------------------------------------------
**Use Case**: Metro deployment across multiple datacenters
**Implementation**:
- **Single Deployment**: One Ironic service
- **Conductor Groups**: ``datacenter-1``, ``datacenter-2``, ``datacenter-3``
- **Nova Configuration**: Target specific conductor groups
- **Shards**: Optional operational grouping
In this case, we don't expect BMC management network access to occur between
datacenters. Thus, each datacenter is configured with its own group of
conductors.
**Example Configuration**:
.. code-block:: bash
# Configure Nova compute to target specific conductor group
[ironic]
conductor_group = datacenter-1
# Configure conductors (ironic.conf)
[conductor]
conductor_group = datacenter-1
# Assign nodes
baremetal node set --conductor-group datacenter-1 <node-uuid>
.. note::
Some larger operators who leverage conductor groups have suggested
that it is sometimes logical to have a conductor set without a
``conductor_group`` set. This helps prevent orphaning nodes because
Ironic routes all changes to the conductor which presently manages
the node.
Pattern 3: Operational Scaling Within Datacenters
--------------------------------------------------
**Use Case**: Large deployment requiring parallel operations
**Implementation**:
- **Single Deployment**: One Ironic service
- **Conductor Groups**: By datacenter or hardware type
- **Shards**: Operational batches for maintenance/upgrades
- **Nova Configuration**: May target specific conductor groups
**Example**:
.. code-block:: bash
# Set up conductor groups by hardware
baremetal node set --conductor-group dell-servers <node-uuid-1>
baremetal node set --conductor-group hpe-servers <node-uuid-2>
# Create operational shards for maintenance
baremetal node set --shard maintenance-batch-1 <node-uuid-1>
baremetal node set --shard maintenance-batch-2 <node-uuid-2>
Pattern 4: Hybrid Multi-Tier Approach
--------------------------------------
**Use Case**: Complex enterprise deployment
**Implementation**: All three tiers working together
**Example Architecture**:
.. code-block:: bash
# Deployment 1: Production East Coast
# Nova compute service targets ironic-prod-east
[ironic]
endpoint_override = https://ironic-prod-east.example.com
conductor_group = datacenter-east
# Within this deployment:
baremetal node set --conductor-group datacenter-east --shard prod-batch-a <node-uuid>
# Deployment 2: Production West Coast
# Nova compute service targets ironic-prod-west
[ironic]
endpoint_override = https://ironic-prod-west.example.com
conductor_group = datacenter-west
Nova Integration and Configuration
==================================
Targeting Multiple Ironic Deployments
--------------------------------------
Nova's Ironic driver can be configured to target different Ironic services:
**Per-Compute Service Configuration**:
.. code-block:: ini
# /etc/nova/nova.conf on compute-service-1
[ironic]
auth_url = https://keystone-region1.example.com/v3
endpoint_override = https://ironic-region1.example.com
conductor_group = region1-zone1
# /etc/nova/nova.conf on compute-service-2
[ironic]
auth_url = https://keystone-region2.example.com/v3
endpoint_override = https://ironic-region2.example.com
conductor_group = region2-zone1
**Advanced Options**:
.. code-block:: ini
[ironic]
# Target specific conductor group within deployment
conductor_group = datacenter-east
# Target specific shard within deployment
shard = production-nodes
# Connection retry configuration
api_max_retries = 60
api_retry_interval = 2
.. seealso::
`Nova Ironic Hypervisor Configuration <https://github.com/openstack/nova/blob/master/doc/source/admin/configuration/hypervisor-ironic.rst>`_
for complete Nova configuration details.
Scaling Considerations
----------------------
**Nova Compute Service Scaling**:
* A single nova-compute can handle several hundred Ironic nodes efficiently.
* Consider multiple compute services for >1000 nodes per deployment.
Nova-compute is modeled on keeping a relatively small number of "instances"
per nova-compute process. For example, 250 baremetal nodes.
* One nova-compute process per conductor group or shard is expected.
* A ``conductor_group`` which is independent of a nova-compute service
configuration can be changed at any time. A shard should never be
changed once it has been introduced to a nova-compute process.
**Multi-Deployment Benefits**:
* Independent scaling per deployment
* Isolated failure domains
* Different operational schedules
Integration Considerations
==========================
Network Considerations
----------------------
Ironic's partitioning works alongside physical network configuration:
* Physical networks can span multiple conductor groups
* Consider network topology when designing conductor group boundaries
* Ensure network connectivity between conductors and their assigned nodes
.. seealso::
:doc:`networking` for detailed network configuration guidance
Nova Placement and Scheduling
------------------------------
When using Ironic with Nova:
* Nova's availability zones operate independently of Ironic's partitioning
* Use resource classes and traits for capability-based scheduling, as sketched below
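A brief sketch; the resource class and trait names below are illustrative
assumptions:

.. code-block:: bash

   # Advertise a resource class that Nova flavors can schedule against
   baremetal node set --resource-class baremetal.large <node-uuid>

   # Add a trait that flavors can require
   baremetal node add trait <node-uuid> CUSTOM_EXAMPLE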
.. seealso::
:doc:`../install/configure-nova-flavors` for flavor and scheduling configuration
API Client Usage
================
Working Across Multiple Deployments
------------------------------------
When managing multiple Ironic deployments, use separate client configurations:
.. code-block:: bash
# Configure client for deployment 1
export OS_AUTH_URL=https://keystone-east.example.com/v3
export OS_ENDPOINT_OVERRIDE=https://ironic-east.example.com
baremetal node list
# Configure client for deployment 2
export OS_AUTH_URL=https://keystone-west.example.com/v3
export OS_ENDPOINT_OVERRIDE=https://ironic-west.example.com
baremetal node list
Filtering by Conductor Group
-----------------------------
.. code-block:: bash
# List nodes by conductor group
baremetal node list --conductor-group datacenter-east
# List ports by node conductor group
baremetal port list --conductor-group datacenter-east
Filtering by Shard
-------------------
.. code-block:: bash
# List nodes by shard
baremetal node list --shard batch-a
# Get shard distribution
baremetal shard list
# Find nodes without a shard assignment
baremetal node list --unsharded
Combined Filtering Within Deployments
--------------------------------------
.. code-block:: bash
# Within a single deployment, filter by conductor group and shard
baremetal node list --conductor-group datacenter-1 --shard maintenance-batch-a
# Set both conductor group and shard on a node
baremetal node set --conductor-group datacenter-east --shard batch-a <node-uuid>
# Get overview of resource distribution
baremetal shard list
baremetal conductor list
Best Practices
==============
Deployment Strategy Planning
----------------------------
1. **Assess isolation requirements**: Determine if you need complete service separation
2. **Plan geographic distribution**: Use multiple deployments for true regional separation
3. **Design conductor groups**: Align with physical/network boundaries
4. **Implement shard strategy**: Plan for operational efficiency
5. **Configure Nova appropriately**: Match Nova compute services to your architecture
Operational Considerations
--------------------------
**Multiple Deployments**:
* Maintain consistent tooling across deployments
* Plan for cross-deployment migrations if needed
* Monitor each deployment independently
* Coordinate upgrade schedules
**Within Deployments**:
* Monitor conductor distribution: ``baremetal shard list``
* Ensure conductor redundancy per group
* Align network topology with conductor groups
* Automate shard management for balance
**Nova Integration**:
* Plan compute service distribution across deployments
* Monitor nova-compute to Ironic node ratios
* Test failover scenarios between compute services
Naming Conventions
------------------
Naming patterns can be defined by the infrastructure operator; below
are some basic suggestions which may be relevant based upon operational
requirements.
**Conductor Groups**:
* Geographic: ``datacenter-east``, ``region-us-west``, ``rack-01``
* Hardware: ``dell-servers``, ``hpe-gen10``, ``gpu-nodes``
* Network: ``vlan-100``, ``isolated-network``
**Shards**:
* Operational: ``maintenance-batch-1``, ``upgrade-group-a``
* Size-based: ``small-nodes``, ``large-memory``
* Temporal: ``weekend-maintenance``, ``business-hours``
Decision Matrix
---------------
Choose your approach based on requirements:
+-------------------------+-------------------+-----------------+---------------+
| **Requirement**         | **Multiple        | **Conductor     | **Shards**    |
|                         | Deployments**     | Groups**        |               |
+=========================+===================+=================+===============+
| Complete isolation      | ✓ Best            | ✓ Good          | ✗ No          |
+-------------------------+-------------------+-----------------+---------------+
| Independent upgrades    | ✓ Complete        | ✓ Partial       | ✗ No          |
+-------------------------+-------------------+-----------------+---------------+
| Geographic separation   | ✓ Best            | ✓ Good          | ✗ No          |
+-------------------------+-------------------+-----------------+---------------+
| Operational scaling     | ✗ Overhead        | ✓ Good          | ✓ Best        |
+-------------------------+-------------------+-----------------+---------------+
| Resource efficiency     | ✗ Lower           | ✓ Good          | ✓ Best        |
+-------------------------+-------------------+-----------------+---------------+
Troubleshooting
===============
Multiple Deployment Issues
---------------------------
**Connectivity Problems**:
.. code-block:: bash
# Test connectivity to each deployment
baremetal --os-endpoint-override https://ironic-east.example.com node list
baremetal --os-endpoint-override https://ironic-west.example.com node list
**Nova Configuration Issues**:
.. code-block:: bash
# Check Nova compute service registration
openstack compute service list --service nova-compute
# Verify Nova can reach Ironic
grep -i ironic /var/log/nova/nova-compute.log
**Cross-Deployment Node Migration**:
.. code-block:: bash
# Export node data from source deployment
baremetal node show --fields all <node-uuid>
# Import to destination deployment (manual process)
# Note: Requires careful planning and may need custom tooling
Common Issues Within Deployments
---------------------------------
**Orphaned nodes**: Nodes without matching conductor groups cannot be managed
.. code-block:: bash
# Find nodes without conductor groups
baremetal node list --conductor-group ""
# List available conductor groups
baremetal conductor list
**Unbalanced shards**: Monitor node distribution across shards
.. code-block:: bash
# Check shard distribution
baremetal shard list
# Find heavily loaded shards
baremetal node list --shard <shard-name> | wc -l
**Missing conductor groups**: Ensure all groups have active conductors
.. code-block:: bash
# Check conductor status
baremetal conductor list
# Verify conductor group configuration
# Check ironic.conf [conductor] conductor_group setting
Migration Scenarios
-------------------
**Moving nodes between conductor groups**:
.. code-block:: bash
# Move node to different conductor group
baremetal node set --conductor-group new-group <node-uuid>
**Reassigning shards**:
.. code-block:: bash
# Change node shard assignment
baremetal node set --shard new-shard <node-uuid>
# Remove shard assignment
baremetal node unset --shard <node-uuid>
.. warning::
Shards should never be changed once a nova-compute service has
identified a node in Ironic. Changing a shard at this point is
an unsupported action. As such, Ironic's API RBAC policy restricts
these actions to a "System-Scoped Admin" user. Normal Admin users
are denied this capability due to the restriction and requirement
on the nova-compute side regarding the consumption of shards.
See Also
========
* :doc:`conductor-groups` - Detailed conductor group configuration
* :doc:`networking` - Physical network considerations
* :doc:`../install/refarch/index` - Reference architectures
* :doc:`multitenancy` - Multi-tenant deployments
* :doc:`tuning` - Performance tuning considerations
* `Nova Ironic Driver Documentation <https://github.com/openstack/nova/blob/master/doc/source/admin/configuration/hypervisor-ironic.rst>`_
* `Nova Ironic Configuration Options <https://github.com/openstack/nova/blob/master/nova/conf/ironic.py>`_
@ -55,9 +55,9 @@ To retrieve the cached BIOS configuration from a specified node::
BIOS settings are cached on each node cleaning operation or when settings
have been applied successfully via BIOS cleaning steps. The return of the above
command is a table of the last cached BIOS settings from the specified node.
If ``-f json`` is added as a suffix to the above command, it returns the BIOS
settings as follows::
[
{
@ -81,8 +81,8 @@ To get a specified BIOS setting for a node::
$ baremetal node bios setting show <node> <setting-name>
If ``-f json`` is added as a suffix to the above command, it returns the BIOS
settings as follows::
{
"setting name":
@ -7,7 +7,7 @@ Boot From Volume
Overview
========
The Bare Metal service supports booting from a Cinder iSCSI volume as of the
Pike release. This guide will primarily deal with this use case but will be
Pike release. This guide will primarily deal with this use case, but will be
updated as more paths for booting from a volume, such as FCoE, are introduced.
The boot from volume is supported on both legacy BIOS and
@ -25,12 +25,12 @@ the node OR the iPXE boot templates such that the node CAN be booted.
:width: 100%
In this example, the boot interface does the heavy lifting. Drivers such as
the ``irmc`` and ``ilo`` hardware types, which have hardware type-specific boot
interfaces, are able to signal via an out-of-band mechanism to the
baremetal node's BMC that the integrated iSCSI initiators are to connect
to the supplied volume target information.
In most hardware, this would be the network cards of the machine.
In the case of the ``ipxe`` boot interface, templates are created on disk
which point to the iscsi target information that was either submitted
@ -39,7 +39,7 @@ requested as the baremetal's boot from volume disk upon requesting the
instance.
In terms of network access, both interface methods require connectivity
to the iscsi target. In the vendor driver-specific path, additional network
to the iscsi target. In the vendor driver specific path, additional network
configuration options may be available to allow separation of standard
network traffic and instance network traffic. In the iPXE case, this is
not possible as the OS userspace re-configures the iSCSI connection
@ -47,7 +47,7 @@ after detection inside the OS ramdisk boot.
An iPXE user *may* be able to leverage multiple VIFs, one specifically
set with ``pxe_enabled`` to handle the initial instance boot
and back-end storage traffic, whereas external-facing network traffic
occurs on a different interface. This is a common pattern in iSCSI
based deployments in the physical realm.
@ -69,7 +69,7 @@ Currently booting from a volume requires:
Conductor Configuration
=======================
In ironic.conf, you can specify a list of enabled storage interfaces. Check
:oslo.config:option:`DEFAULT.enabled_storage_interfaces` in your ironic.conf to ensure that
``[DEFAULT]enabled_storage_interfaces`` in your ironic.conf to ensure that
your desired interface is enabled. For example, to enable the ``cinder`` and
``noop`` storage interfaces::
@ -77,7 +77,7 @@ your desired interface is enabled. For example, to enable the ``cinder`` and
enabled_storage_interfaces = cinder,noop
If you want to specify a default storage interface rather than setting the
storage interface on a per node basis, set :oslo.config:option:`DEFAULT.default_storage_interface`
storage interface on a per node basis, set ``[DEFAULT]default_storage_interface``
in ironic.conf. The ``default_storage_interface`` will be used for any node that
doesn't have a storage interface defined.
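For example, to make ``cinder`` the default for any node that does not set
a storage interface explicitly::

    [DEFAULT]
    default_storage_interface = cinder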
@ -95,14 +95,6 @@ on an existing node::
A default storage interface can be specified in ironic.conf. See the
`Conductor Configuration`_ section for details.
The storage interface is responsible for managing the mapping state of the
volume to the host. If some changes, such as a change in iSCSI
credentials, need to be communicated from Cinder for Ironic to become
aware of them, then the act of powering the baremetal node off via
Ironic's API will trigger these values to be updated automatically, as
the ``cinder`` storage interface resets the volume attachments on power
actions to ensure the latest information is used for each boot sequence.
iSCSI Configuration
-------------------
In order for a bare metal node to boot from an iSCSI volume, the ``iscsi_boot``
@ -148,13 +140,13 @@ Use without the Compute Service
-------------------------------
As discussed in other sections, the Bare Metal service has a concept of a
``connector`` that is used to represent an interface that is intended to
be utilized to attach the remote volume.
In addition to the connectors, we have a concept of a ``target`` that can be
defined via the API. While a user of this feature through the Compute
service would automatically have a new target record created for them,
it is not explicitly required and can be performed manually.
A target record can be created using a command similar to the example below::
@ -184,7 +176,7 @@ the node should or could boot from a remote volume.
It must be noted that minimal configuration or value validation occurs
with the ``external`` storage interface. The ``cinder`` storage interface
contains more extensive validation, which is likely unnecessary in an
``external`` scenario.
Setting the external storage interface::
@ -236,7 +228,7 @@ contain support for multi-attach volumes.
When support for storage interfaces was added to the Bare Metal service,
specifically for the ``cinder`` storage interface, the concept of volume
multi-attach was accounted for, however has not been fully tested,
and is unlikely to be fully tested until there is a Compute service integration
and is unlikely to be fully tested until there is Compute service integration
as well as volume driver support.
The data model for storage of volume targets in the Bare Metal service
@ -4,7 +4,7 @@ Building images for Windows
---------------------------
We can use ``New-WindowsOnlineImage`` in the `windows-openstack-imaging-tools`_
tool as an option to create Windows images (whole disk images) for the
corresponding boot modes, which support Windows NIC Teaming and allow the
utilization of link aggregation when the instance is spawned on hardware
servers (bare metal).
@ -16,26 +16,27 @@ Requirements:
``PowerShell`` version >=4 supported,
``Windows Assessment and Deployment Kit``,
in short ``Windows ADK``.
* The Windows Server compatible drivers.
* Working git environment.
Preparation:
~~~~~~~~~~~~
* Download a Windows Server 2012R2/ 2016 installation ISO.
* Install Windows Server 2012R2/ 2016 OS on the workstation PC along with
the following features:
- Enable Hyper-V virtualization.
- Install PowerShell 4.0.
- Install Git environment & import git proxy (if you have one).
- Create a new ``Path`` in the Microsoft Windows Server Operating System which
supports submodule update via the ``git submodule update --init`` command::
- Variable name: Path
- Variable value: C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Git\bin
- Rename the virtual switch name in Windows Server 2012R2/ 2016 in
``Virtual Switch Manager`` to ``external``.
Implementation:
~~~~~~~~~~~~~~~
@ -55,7 +56,7 @@ Implementation:
git clone https://github.com/cloudbase/windows-openstack-imaging-tools.git
* ``Step 5``: Create and run the script ``create-windows-cloud-image.ps1``:
.. code-block:: console
@ -84,7 +85,7 @@ Implementation:
.. note::
We can change ``SizeBytes``, ``CpuCores``, and ``Memory`` depending on requirements.
We can change ``SizeBytes``, ``CpuCores`` and ``Memory`` depending on requirements.
.. _`example_windows_images`: https://github.com/cloudbase/windows-openstack-imaging-tools/blob/master/Examples
.. _`windows-openstack-imaging-tools`: https://github.com/cloudbase/windows-openstack-imaging-tools
@ -1,10 +1,3 @@
.. meta::
:description: Automated cleaning and preparation of bare metal nodes in Ironic. Security wiping, hardware configuration, and node lifecycle management.
:keywords: node cleaning, automated cleaning, security wiping, hardware preparation, node lifecycle, tenant isolation, data security
:author: OpenStack Ironic Team
:robots: index, follow
:audience: security engineers, system administrators
.. _cleaning:
=============
@ -27,10 +20,16 @@ one workload to another.
Automated cleaning
==================
When hardware is recycled from one workload to another, Ironic performs
When hardware is recycled from one workload to another, ironic performs
automated cleaning on the node to ensure it's ready for another workload. This
ensures the tenant will get a consistent bare metal node deployed every time.
Ironic implements automated cleaning by collecting a list of cleaning steps
to perform on a node from the Power, Deploy, Management, BIOS, and RAID
interfaces of the driver assigned to the node. These steps are then ordered by
priority and executed on the node when the node is moved to ``cleaning`` state,
if automated cleaning is enabled.
With automated cleaning, nodes move to ``cleaning`` state when moving from
``active`` -> ``available`` state (when the hardware is recycled from one
workload to another). Nodes also traverse cleaning when going from
@ -38,6 +37,7 @@ workload to another). Nodes also traverse cleaning when going from
assigned to the nodes). For a full understanding of all state transitions
into cleaning, please see :ref:`states`.
Ironic added support for automated cleaning in the Kilo release.
.. _enabling-cleaning:
@ -52,7 +52,7 @@ To enable automated cleaning, ensure that your ironic.conf is set as follows:
automated_clean=true
This will enable the default set of cleaning steps, based on your hardware and
Ironic hardware types used for nodes. This includes, by default, erasing all
ironic hardware types used for nodes. This includes, by default, erasing all
of the previous tenant's data.
You may also need to configure a `Cleaning Network`_.
@ -60,231 +60,72 @@ You may also need to configure a `Cleaning Network`_.
Cleaning steps
--------------
The way cleaning steps are determined depends on the value of
:oslo.config:option:`conductor.automated_cleaning_step_source`:
Cleaning steps used for automated cleaning are ordered from higher to lower
priority, where a larger integer is a higher priority. In case of a conflict
between priorities across interfaces, the following resolution order is used:
Power, Management, Deploy, BIOS, and RAID interfaces.
**Autogenerated cleaning steps** ('autogenerated')
This is the default mode of Ironic automated cleaning and provides the
original Ironic behavior implemented originally in Kilo.
You can skip a cleaning step by setting the priority for that cleaning step
to zero or 'None'.
Steps are collected from hardware interfaces and ordered from higher to
lower priority, where a larger integer is a higher priority. In case of
a conflict between priorities across interfaces, the following resolution
order is used: Power, Management, Deploy, BIOS, and RAID interfaces.
You can reorder the cleaning steps by modifying the integer priorities of the
cleaning steps.
You can skip a cleaning step by setting the priority for that cleaning
step to zero or ``None``. You can reorder the cleaning steps by modifying
the integer priorities of the cleaning steps.
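For instance, a minimal sketch that skips a single autogenerated step by
forcing its priority to zero; the step name below is illustrative, and any
step can be targeted this way:

.. code-block:: ini

   [conductor]
   clean_step_priority_override = deploy.erase_devices_metadata:0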
**Runbook-based cleaning steps** ('runbook')
When using :ref:`runbooks` for automated cleaning, the exact steps and
their order are defined in the runbook. Priority-based ordering does not
apply; steps execute in the order specified in the runbook.
If there is not a runbook assigned to perform cleaning on the node, and
automated_cleaning is enabled, the machine will fail to clean and go into
a ``clean failed`` state.
**Hybrid** ('hybrid')
This uses a runbook-based cleaning method if a cleaning runbook is
configured for the node being cleaned. In this mode, if there is not a
runbook configured for cleaning Ironic will fall-back to autogenerating
cleaning steps.
See `How do I change the priority of a cleaning step?`_ for more information on
changing the priority of an autogenerated cleaning step.
See :ref:`runbook-cleaning` for full details on configuring cleaning
runbooks.
See `How do I change the priority of a cleaning step?`_ for more information.
Storage cleaning options
------------------------
.. warning::
Ironic's storage cleaning options by default will remove data from the disk
permanently during automated cleaning.
Clean steps specific to storage are ``erase_devices``,
``erase_devices_metadata`` and (added in Yoga) ``erase_devices_express``.
``erase_devices`` aims to ensure that the data is removed in the most secure
way available. On devices that support hardware-assisted secure erasure
(many NVMe and some ATA drives), this is the preferred option. If
way available. On devices that support hardware assisted secure erasure
(many NVMe and some ATA drives) this is the preferred option. If
hardware-assisted secure erasure is not available and if
:oslo.config:option:`deploy.continue_if_disk_secure_erase_fails` is set to
``True``, cleaning will fall back to using ``shred`` to overwrite the
contents of the device. By default, if ``erase_devices`` is enabled
and Ironic is unable to erase the device, cleaning will fail to ensure
data security.
.. note::
``erase_devices`` may take a very long time (hours or even days) to
complete, unless fast, hardware-assisted data erasure is supported by
all the devices in a system.
``[deploy]/continue_if_disk_secure_erase_fails`` is set to ``True``, cleaning
will fall back to using ``shred`` to overwrite the contents of the device.
Otherwise cleaning will fail. It is important to note that ``erase_devices``
may take a very long time (hours or even days) to complete, unless fast,
hardware assisted data erasure is supported by all the devices in a system.
Generally, it is very difficult (if possible at all) to recover data after
performing cleaning with ``erase_devices``.
The ``erase_devices_metadata`` clean step doesn't provide as strong an
assurance of irreversible destruction of data as ``erase_devices``. However, it has the
advantage of a reasonably quick runtime (seconds to minutes). It operates by
destroying the metadata of the storage device without erasing every bit of the
data itself. Attempts to restore data after running
data itself. Attempts to restore data after running
``erase_devices_metadata`` may be successful but would certainly require
relevant expertise and specialized tools.
Lastly, ``erase_devices_express`` combines some of the perks of both
``erase_devices`` and ``erase_devices_metadata``. It attempts to utilize
hardware-assisted data erasure features if available (currently only NVMe
devices are supported). In case hardware-assisted data erasure is not
available, it falls back to metadata erasure for the device (which is
identical to ``erase_devices_metadata``). It can be considered a
time-optimized mode of storage cleaning, aiming to perform as thorough
data erasure as it is possible within a short period of time.
This clean step is particularly well suited for environments with hybrid
NVMe-HDD storage configurations, as it allows fast and secure erasure of data
stored on NVMes combined with equally fast but more basic metadata-based
erasure of data on commodity HDDs.
By default, Ironic will use ``erase_devices_metadata`` early in cleaning
for reliability (ensuring a node cannot reboot into its old workload) and
``erase_devices`` later in cleaning to securely erase the drive;
``erase_devices_express`` is disabled.
Operators can use :oslo.config:option:`deploy.erase_devices_priority` and
:oslo.config:option:`deploy.erase_devices_metadata_priority` to change the
priorities of the default device erase methods or disable them entirely
by setting ``0``. Other cleaning steps can have their priority modified
via the :oslo.config:option:`conductor.clean_step_priority_override` option.
For example, the configuration snippet below disables
``erase_devices_metadata`` and ``erase_devices`` and instead performs an
``erase_devices_express`` erase step.
erasure of data on HDDs.
``erase_devices_express`` is disabled by default. In order to use it, the
following configuration is recommended.
.. code-block:: ini
[deploy]
erase_devices_priority=0
erase_devices_metadata_priority=0
[conductor]
clean_step_priority_override=deploy.erase_devices_express:95
[deploy]/erase_devices_priority=0
[deploy]/erase_devices_metadata_priority=0
[conductor]/clean_step_priority_override=deploy.erase_devices_express:5
This ensures that ``erase_devices`` and ``erase_devices_metadata`` are
disabled so that storage is not cleaned twice and then assigns a non-zero
priority to ``erase_devices_express``, hence enabling it. Any non-zero
priority specified in the priority override will work; larger values will
cause the disk erasure to run earlier in the cleaning process if multiple
steps are enabled.
Other configurations that can modify how Ironic erases disks are below.
This list may not be comprehensive. Please review ironic.conf.sample
(linked) for more details:
* :oslo.config:option:`deploy.enable_ata_secure_erase`, default ``True``
* :oslo.config:option:`deploy.enable_nvme_secure_erase`, default ``True``
* :oslo.config:option:`deploy.shred_random_overwrite_iterations`, default ``1``
* :oslo.config:option:`deploy.shred_final_overwrite_with_zeros`, default ``True``
* :oslo.config:option:`deploy.disk_erasure_concurrency`, default ``4``
.. warning::
Ironic automated cleaning is defaulted to a secure configuration. You should
not modify settings related to it unless you have special hardware needs
or a unique use case. Misconfigurations can lead to data exposure
vulnerabilities.
.. _runbook-cleaning:
Configuring automated cleaning with runbooks
--------------------------------------------
Starting with the 2025.2/Flamingo release, operators can configure Ironic to
use runbooks for automated cleaning instead of relying on autogenerated steps.
This provides more control over the cleaning process and ensures consistency
across nodes.
.. warning::
When using runbooks for automated cleaning, ensure they include appropriate
security measures such as disk erasure. Ironic does not validate that a
runbook performs disk cleaning operations or any other specific cleaning
step.
**Trait matching**
As always with runbooks, you must have a trait on the node which matches the
runbook name. This acts as a fail-safe to prevent dangerous, hardware-specific
cleaning steps from running on incompatible hardware.
You can disable this check by setting
:oslo.config:option:`conductor.automated_cleaning_runbook_validate_traits` to
False.
.. code-block:: bash
openstack baremetal node add trait myNode CUSTOM_RB_EXAMPLE
**Configure cleaning runbooks**
Runbooks can be configured at three levels (from most to least specific):
1. **Per-node**:
Operators can set a per-node cleaning runbook override using the following
command:
.. code-block:: bash
openstack baremetal node set myNode --driver-info cleaning_runbook=CUSTOM_RB_EXAMPLE
.. warning::
Customizing cleaning per node requires setting
:oslo.config:option:`conductor.automated_cleaning_runbook_from_node`
to True.
Enabling node-level runbooks allows node owners to override cleaning
behavior by using a noop runbook. Only enable this in trusted
environments.
2. **Per-resource-class**:
Operators can set a runbook per resource_class using
:oslo.config:option:`conductor.automated_cleaning_runbook_by_resource_class`
to build a list of mappings of resource_class to runbook. These runbooks are
used to clean any node in that resource class that does not have a node-level
override.
In this example, the large resource_class uses ``CUSTOM_FULL_CLEAN`` and the
small resource_class uses ``CUSTOM_QUICK_CLEAN``. Nodes in those resource
classes would still be required to have traits matching the runbook name.
.. code-block:: ini
[conductor]
automated_cleaning_runbook_by_resource_class = large:CUSTOM_FULL_CLEAN,small:CUSTOM_QUICK_CLEAN
3. **Global default**:
Operators can also configure a global default, which is used for nodes which
do not already have a more specific runbook configured, such as node-level
overrides or a resource_class mapping.
In this example, any node cleaned in the environment would use
``CUSTOM_DEFAULT_CLEAN``. Unless trait mapping is disabled, all nodes would
be required to have a trait also named ``CUSTOM_DEFAULT_CLEAN`` to
successfully clean.
.. code-block:: ini
[conductor]
automated_cleaning_runbook = CUSTOM_DEFAULT_CLEAN
**Create and assign runbooks**
Create a runbook with the necessary cleaning steps::
baremetal runbook create --name CUSTOM_SECURE_ERASE \
--steps '[{"interface": "deploy", "step": "erase_devices", "args": {}, "order": 0}]'
Ensure nodes have the matching trait::
baremetal node add trait <node> CUSTOM_SECURE_ERASE
priority specified in the priority override will work.
Also `[deploy]/enable_nvme_secure_erase` should not be disabled (it is on by default).
.. show-steps::
:phase: cleaning
@ -294,7 +135,7 @@ Ensure nodes have the matching trait::
Manual cleaning
===============
``Manual cleaning`` is typically used to handle long-running, manual, or
``Manual cleaning`` is typically used to handle long running, manual, or
destructive tasks that an operator wishes to perform either before the first
workload has been assigned to a node or between workloads. When initiating a
manual clean, the operator specifies the cleaning steps to be performed.
@ -331,13 +172,13 @@ dictionary (JSON), in the form::
{
"interface": "<interface>",
"step": "<name of cleaning step>",
"args": {"<arg1>": "<value1>", ..., "<argn>": "<valuen>"}
"args": {"<arg1>": "<value1>", ..., "<argn>": <valuen>}
}
The 'interface' and 'step' keys are required for all steps. If a cleaning step
method takes keyword arguments, the 'args' key may be specified. It
is a dictionary of keyword variable arguments, with each keyword-argument entry
being ``<name>: <value>``.
being <name>: <value>.
If any step is missing a required keyword argument, manual cleaning will not be
performed and the node will be put in ``clean failed`` provision state with an
@ -367,31 +208,7 @@ In the above example, the node's RAID interface would configure hardware
RAID without non-root volumes, and then all devices would be erased
(in that order).
An example is setting the BMC clock using the Redfish management interface::
{
"target": "clean",
"clean_steps": [{
"interface": "management",
"step": "set_bmc_clock",
"args": {"target_datetime": "2025-07-22T12:34:56+00:00"}
}]
}
This step requires the node to use the ``redfish`` management interface
and that the Redfish service exposes the ``DateTime`` and ``DateTimeLocalOffset``
fields under the Manager Resource.
Alternatively, you can specify a runbook instead of clean_steps::
{
"target":"clean",
"runbook": "<runbook_name_or_uuid>"
}
The specified runbook must match one of the node's traits to be used.
Starting manual cleaning via "openstack baremetal" CLI
Starting manual cleaning via "openstack metal" CLI
------------------------------------------------------
Manual cleaning is available via the ``baremetal node clean``
@ -401,7 +218,7 @@ The argument ``--clean-steps`` must be specified. Its value is one of:
- a JSON string
- path to a JSON file whose contents are passed to the API
- ``-`` to read from stdin. This allows piping in the clean steps.
- '-', to read from stdin. This allows piping in the clean steps.
Using '-' to signify stdin is common in Unix utilities.
The following examples assume that the Bare Metal API version was set via
@ -428,22 +245,6 @@ Or with stdin::
cat my-clean-steps.txt | baremetal node clean <node> \
--clean-steps -
Runbooks for Manual Cleaning
----------------------------
Instead of passing a list of clean steps, operators can now use runbooks.
Runbooks are curated lists of steps that can be associated with nodes via
traits, which simplifies the process of performing consistent cleaning
operations across similar nodes.
To use a runbook for manual cleaning::
baremetal node clean <node> --runbook <runbook_name_or_uuid>
Runbooks must be created and associated with nodes beforehand. Only runbooks
that match the node's traits can be used for cleaning that node.
For more information on the runbook API usage, see :ref:`runbooks`.
Cleaning Network
================
@ -462,7 +263,7 @@ out-of-band. Ironic supports using both methods to clean a node.
In-band
-------
In-band steps are performed by Ironic making API calls to a ramdisk running
In-band steps are performed by ironic making API calls to a ramdisk running
on the node using a deploy interface. Currently, all the deploy interfaces
support in-band cleaning. By default, ironic-python-agent ships with a minimal
cleaning configuration, only erasing disks. However, you can add your own
@ -472,7 +273,7 @@ Hardware Manager.
Out-of-band
-----------
Out-of-band are actions performed by your management controller, such as IPMI,
iLO, or DRAC. Out-of-band steps will be performed by Ironic using a power or
iLO, or DRAC. Out-of-band steps will be performed by ironic using a power or
management interface. Which steps are performed depends on the hardware type
and hardware itself.
@ -499,14 +300,12 @@ order.
How do I skip a cleaning step?
------------------------------
For automated cleaning, cleaning steps with a priority of zero or ``None`` are skipped.
For automated cleaning, cleaning steps with a priority of 0 or None are skipped.
.. _clean_step_priority:
How do I change the priority of a cleaning step?
------------------------------------------------
For manual cleaning, or runbook-based cleaning, specify the cleaning steps in
the desired order.
For manual cleaning, specify the cleaning steps in the desired order.
For automated cleaning, it depends on whether the cleaning steps are
out-of-band or in-band.
@ -515,9 +314,46 @@ Most out-of-band cleaning steps have an explicit configuration option for
priority.
Changing the priority of an in-band (ironic-python-agent) cleaning step
requires use of a custom HardwareManager. The only exception is
``erase_devices``, which can have its priority set in ironic.conf. For instance,
to disable erase_devices, you'd set the following configuration option::
[deploy]
erase_devices_priority=0
To enable or disable the in-band disk erase using the ``ilo`` hardware type,
use the following configuration option::
[ilo]
clean_priority_erase_devices=0
The generic hardware manager first identifies whether a device is an NVMe
drive or an ATA drive so that it can attempt a platform-specific secure erase
method. In the case of NVMe drives, it tries to perform a secure format
operation using the ``nvme-cli`` utility. This behavior can be controlled using
the following configuration option (by default it is set to ``True``)::
[deploy]
enable_nvme_secure_erase=True
In the case of ATA drives, it tries to perform an ATA disk erase using the
``hdparm`` utility.
If neither method is supported, it performs a software-based disk erase using
the ``shred`` utility. By default, ``shred`` performs one iteration for the
software-based disk erase. To configure the number of iterations, use the
following configuration option::
[deploy]
erase_devices_iterations=1
Overriding step priority
------------------------
``[conductor]clean_step_priority_override`` is a configuration option
which allows specifying the priority of each step using multiple configuration
values:
.. code-block:: ini
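[conductor]
# Assumed format: <interface>.<step_name>:<priority>; the entries below are
# illustrative. The option may be repeated once per step to override.
clean_step_priority_override = deploy.erase_devices_metadata:123
clean_step_priority_override = deploy.erase_devices:234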
@ -543,8 +379,8 @@ the node failed before going into ``clean failed`` state.
Should I disable automated cleaning?
------------------------------------
Automated cleaning is recommended for Ironic deployments, however, there are
some tradeoffs to having it enabled. For instance, Ironic cannot deploy a new
instance to a node that is currently cleaning, and cleaning can be a
time-consuming process. To mitigate this, we suggest using NVMe drives with support
for NVMe Secure Erase (based on ``nvme-cli`` format command) or ATA drives
@ -556,7 +392,7 @@ Why can't I power on/off a node while it's cleaning?
----------------------------------------------------
During cleaning, nodes may be performing actions that shouldn't be
interrupted, such as BIOS or Firmware updates. As a result, operators are
forbidden from changing the power state via the Ironic API while a node is
cleaning.
Advanced topics
@ -571,7 +407,7 @@ account child nodes. Mainly, the concept of executing clean steps in relation
to child nodes.
In this context, a child node is primarily intended to be an embedded device
with its own management controller, for example "SmartNICs" or Data
Processing Units (DPUs) which may have their own management controller and
power control.
@ -582,9 +418,9 @@ The relationship between a parent node and a child node is established on the ch
Child Node Clean Step Execution
-------------------------------
You can execute steps that perform actions on child nodes. For example,
turn them on (via step ``power_on``), off (via step ``power_off``), or to
signal a BMC-controlled reboot (via step ``reboot``).
For example, if you need to explicitly power off a child node before
performing another step, you can articulate it with a step such as the
following sketch (field values are illustrative)::
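{
    "interface": "power",
    "step": "power_off",
    "execute_on_child_nodes": true,
    "limit_child_node_execution": ["<child node uuid>"]
}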
@ -609,37 +445,20 @@ power will be turned off via the management interface. Afterwards, the
While the deployment step framework also supports the
``execute_on_child_nodes`` and ``limit_child_node_execution`` parameters,
all of the step frameworks have a fundamental limitation in that child node
step execution is intended for synchronous actions which do not rely upon
the ``ironic-python-agent`` running on any child nodes. This constraint may
be changed in the future.
Power Management with Child Nodes
---------------------------------
The mix of child nodes and parent nodes has special power considerations,
and these devices are evolving in the industry. That being said, the Ironic
project has taken an approach of explicitly attempting to "power on" any
parent node when a request comes in to "power on" a child node. This can be
bypassed by setting the ``driver_info`` parameter ``has_dedicated_power_supply``
to ``True``, in recognition that some hardware vendors are working on
supplying independent power to these classes of devices to meet their customer
use cases.
Similarly to the "power on" case for a child node, when power
is requested to be turned off for a parent node, Ironic will issue
"power off" commands for all child nodes unless the child node has the
``has_dedicated_power_supply`` option set in the node's ``driver_info`` field.
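A hedged sketch of opting a child node out of this behavior (the node
identifier is a placeholder)::
baremetal node set <child-node> \
    --driver-info has_dedicated_power_supply=True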
Troubleshooting
===============
If cleaning fails on a node, the node will be put into ``clean failed`` state.
If the failure happens while running a clean step, the node is also placed in
maintenance mode to prevent Ironic from taking actions on the node. The
operator should validate that no permanent damage has been done to the
node and that no processes are still running on it before removing the
maintenance mode.
.. note:: Older versions of Ironic may put the node to maintenance even when
no clean step has been running.
Nodes in ``clean failed`` will not be powered off, as the node might be in a
@ -647,7 +466,7 @@ state such that powering it off could damage the node or remove useful
information about the nature of the cleaning failure.
A ``clean failed`` node can be moved to ``manageable`` state, where it cannot
be scheduled by Nova and you can safely attempt to fix the node. To move a node
from ``clean failed`` to ``manageable``::
baremetal node manage $node_ident
@ -655,19 +474,19 @@ from ``clean failed`` to ``manageable``::
You can now take actions on the node, such as replacing a bad disk drive.
Strategies for determining why a cleaning step failed include checking the
Ironic conductor logs, viewing logs on the still-running ironic-python-agent
(if an in-band step failed), or performing general hardware troubleshooting on
the node.
When the node is repaired, you can move the node back to ``available`` state,
to allow it to be scheduled by Nova.
::
# First, move it out of maintenance mode
baremetal node maintenance unset $node_ident
# Now, make the node available for scheduling by Nova
baremetal node provide $node_ident
The node will begin automated cleaning from the start, and move to

View file

@ -4,25 +4,16 @@
Conductor Groups
================
.. seealso::
For a complete guide on achieving availability zone functionality,
see :doc:`availability-zones`.
Overview
========
Conductor groups provide **physical resource partitioning** in Ironic,
similar to Nova's availability zones but focused on conductor-level management.
They work alongside :ref:`shards <availability-zones-shards>` to provide
complete resource isolation and operational scaling capabilities.
Large-scale operators tend to have needs that involve creating
well-defined and delineated resources. In some cases, these systems
may reside close by or in faraway locations. The reasoning may be simple
or complex, and yet is only known to the deployer and operator of the
infrastructure.
A common case is the need for delineated high-availability domains
where it would be much more efficient to manage a datacenter in Antarctica
with a conductor in Antarctica, as opposed to a conductor in New York City.
@ -33,12 +24,12 @@ Starting in ironic 11.1, each node has a ``conductor_group`` field which
influences how the ironic conductor calculates (and thus allocates)
baremetal nodes under ironic's management. This calculation is performed
independently by each operating conductor and, as such, if a conductor has
a :oslo.config:option:`conductor.conductor_group` configuration option defined in its
``ironic.conf`` configuration file, the conductor will then be limited to
only managing nodes with a matching ``conductor_group`` string.
.. note::
Any conductor without a :oslo.config:option:`conductor.conductor_group` setting will
only manage baremetal nodes without a ``conductor_group`` value set upon
node creation. If no such conductor is present when conductor groups are
configured, node creation will fail unless a ``conductor_group`` is
@ -46,18 +37,18 @@ only managing nodes with a matching ``conductor_group`` string.
.. warning::
Nodes without a ``conductor_group`` setting can only be managed when a
conductor exists that does not have a :oslo.config:option:`conductor.conductor_group`
defined. If all conductors have been migrated to use a conductor group,
such nodes are effectively "orphaned".
How to use
==========
A conductor group value may be any case-insensitive string up to 255
characters long which matches the ``^[a-zA-Z0-9_\-\.]*$`` regular
expression.
#. Set the :oslo.config:option:`conductor.conductor_group` option in ironic.conf
on one or more, but not all, conductors::
[conductor]
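conductor_group = OperatorDefinedString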
@ -70,21 +61,6 @@ expression.
baremetal node set \
--conductor-group "OperatorDefinedString" <uuid>
#. As desired and as needed, the remaining conductors can be updated with
the first two steps. Please be mindful of the constraints covered
earlier in the document related to the ability to manage nodes.
Advanced Usage with Multiple Deployments
=========================================
Conductor groups work within a single Ironic deployment. For complete
service isolation across geographic regions or regulatory boundaries,
consider using :ref:`multiple Ironic deployments <availability-zones:Tier 1: Multiple Ironic Deployments>`
targeted by different Nova compute services.
See Also
========
* :doc:`availability-zones` - Complete availability zone strategy
* :doc:`networking` - Physical network considerations
* :doc:`../install/refarch/index` - Reference architectures

View file

@ -1,55 +1,15 @@
.. _console:
====================
Configuring Consoles
====================
Overview
--------
There are three types of consoles available in the Bare Metal service:
* `Node graphical console`_: a graphical console accessed from a NoVNC web browser
* `Node web console`_: a terminal available from a web browser
* `Node serial console`_: serial console support
Node graphical console
----------------------
Graphical console drivers require a configured and running ``ironic-novncproxy``
service. Each supported driver is described below.
redfish-graphical
~~~~~~~~~~~~~~~~~
A driver for a subset of Redfish hosts. Starting the console will start a
container which exposes a VNC server for ``ironic-novncproxy`` to attach to.
When attached, the browser displays an HTML5-based console on the
following supported hosts:
* Dell iDRAC
* HPE iLO
* Supermicro
.. code-block:: ini
[DEFAULT]
enabled_hardware_types = redfish
enabled_console_interfaces = redfish-graphical,no-console
fake-graphical
~~~~~~~~~~~~~~~~~
A driver for demonstrating working graphical console infrastructure. Starting
the console will start a container which exposes a VNC server for
``ironic-novncproxy`` to attach to. When attached, a browser will start which
displays an animation.
.. code-block:: ini
[DEFAULT]
enabled_hardware_types = fake-hardware
enabled_console_interfaces = fake-graphical,no-console
Node web console
----------------
@ -57,17 +17,15 @@ Node web console
The web console can be configured in Bare Metal service in the following way:
* Install shellinabox on the ironic conductor node. For RHEL/CentOS, the
shellinabox package is not present in the base repositories; the user must
enable the EPEL repository. You can find more information on the
`FedoraProject page`_.
.. warning::
Shell In A Box is considered abandoned by the Ironic community. The
original maintainer stopped maintaining the project and the project
was thus forked. The resulting
`fork <https://github.com/shellinabox/shellinabox>`_ has not received
updates in a number of years and is considered abandoned.
As such, shellinabox support has been deprecated by the Ironic community.
Installation example:
@ -75,7 +33,7 @@ The web console can be configured in Bare Metal service in the following way:
sudo apt-get install shellinabox
RHEL/CentOS/Fedora::
sudo dnf install shellinabox
@ -90,7 +48,7 @@ The web console can be configured in Bare Metal service in the following way:
sudo apt-get install openssl
RHEL/CentOS/Fedora::
sudo dnf install openssl
@ -108,7 +66,7 @@ The web console can be configured in Bare Metal service in the following way:
* Customize the console section in the Bare Metal service configuration
file (/etc/ironic/ironic.conf); if you want to use an SSL certificate in
shellinabox, you should specify ``terminal_cert_dir``.
For example::
[console]
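# illustrative value; point this at the directory holding your SSL certificates
terminal_cert_dir = /tmp/ca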
@ -189,9 +147,9 @@ The web console can be configured in Bare Metal service in the following way:
| console_info | {u'url': u'http://<url>:<customized_port>', u'type': u'shellinabox'} |
+-----------------+----------------------------------------------------------------------+
You can open the web console using the above ``url`` in a web browser. If
``console_enabled`` is ``false`` and ``console_info`` is ``None``, the web
console is disabled. If you want to launch the web console, see the
``Configure node web console`` part.
.. note::
@ -213,12 +171,7 @@ Node serial console
-------------------
Serial consoles for nodes are implemented using `socat`_. It is supported by
the ``ipmi``, ``irmc``, and ``redfish`` hardware types.
.. NOTE::
The use of the ``ipmitool-socat`` console interface on any hardware type
requires the ipmi connection parameters to be set into the ``driver_info``
field on the node.
Serial consoles can be configured in the Bare Metal service as follows:
@ -231,7 +184,7 @@ Serial consoles can be configured in the Bare Metal service as follows:
sudo apt-get install socat
RHEL/CentOS/Fedora::
sudo dnf install socat
@ -282,7 +235,7 @@ If ``console_enabled`` is ``false`` or ``console_info`` is ``None`` then
the serial console is disabled. If you want to launch the serial console, see
the ``Configure node console`` section.
The node serial console of the Bare Metal service is compatible with the
serial console of the Compute service. Hence, serial consoles to
Bare Metal nodes can be seen and interacted with via the Dashboard service.
In order to achieve that, you need to follow the documentation for
@ -316,7 +269,7 @@ configuration, you may consider some settings below.
* The Compute service's caching feature may need to be enabled in order
to make the Bare Metal serial console work under an HA configuration.
Here is an example of a caching configuration in ``nova.conf``.
.. code-block:: ini
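[cache]
# a minimal sketch of enabling the Compute service's caching feature via
# oslo.cache; the backend and server values are illustrative
enabled = true
backend = dogpile.cache.memcached
memcache_servers = <memcached-host>:11211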

View file

@ -1,8 +0,0 @@
Dashboard Integration
---------------------
A plugin for the OpenStack Dashboard (horizon) service is under development.
Documentation for that can be found within the ironic-ui project.
* :ironic-ui-doc:`Dashboard (horizon) plugin <>`

View file

@ -4,7 +4,7 @@ Layer 3 or DHCP-less ramdisk booting
Booting nodes via PXE, while universally supported, suffers from one
disadvantage: it requires direct L2 connectivity between the node and the
control plane for DHCP. Using virtual media it is possible to avoid not only
the unreliable TFTP protocol but DHCP altogether.
When network data is provided for a node as explained below, the generated
virtual media ISO will also serve as a configdrive_, and the network data will
@ -42,8 +42,8 @@ When the Bare Metal service is running within OpenStack, no additional
configuration is required - the network configuration will be fetched from the
Network service.
Alternatively, the user can build and pass network configuration in the form
of a network_data_ JSON to a node via the ``network_data`` field. Node-based
configuration takes precedence over the configuration generated by the
Network service and also works in standalone mode.
@ -79,7 +79,7 @@ An example network data:
.. note::
Some fields are redundant with the port information. We're looking into
simplifying the format, but currently, all these fields are mandatory.
You'll need the deployed image to support network data, e.g. by pre-installing
cloud-init_ or Glean_ on it (most cloud images have the former). Then you can
@ -131,7 +131,7 @@ the service catalog or configured in the ``[service_catalog]`` section:
In case you need specific URLs for each node, you can use the
``driver_info[external_http_url]`` node property. When used it overrides the
:oslo.config:option:`deploy.http_url` and :oslo.config:option:`deploy.external_http_url` settings in the
configuration file.
.. code-block:: bash
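# a sketch; the URL is illustrative and must be reachable from the node
baremetal node set <node> \
    --driver-info external_http_url=http://192.0.2.10:8080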

View file

@ -1,13 +1,6 @@
.. meta::
:description: Comprehensive guide to Ironic hardware drivers and interfaces. Configure support for IPMI, Redfish, vendor-specific management, and hardware types.
:keywords: ironic drivers, hardware interfaces, IPMI, redfish, hardware management, vendor drivers, boot interfaces, power management
:author: OpenStack Ironic Team
:robots: index, follow
:audience: system administrators, hardware engineers
===========================================================
Drivers, Hardware Types, and Hardware Interfaces for Ironic
===========================================================
Generic Interfaces
------------------
@ -24,6 +17,7 @@ Hardware Types
.. toctree::
:maxdepth: 1
drivers/ibmc
drivers/idrac
drivers/ilo
drivers/intel-ipmi
@ -31,6 +25,7 @@ Hardware Types
drivers/irmc
drivers/redfish
drivers/snmp
drivers/xclarity
drivers/fake
Changing Hardware Types and Interfaces
@ -47,7 +42,7 @@ Any hardware interfaces can be specified on enrollment as well::
baremetal node create --driver <hardware type> \
--deploy-interface direct --<other>-interface <other implementation>
For the remaining interfaces, the default value is assigned as described in
:ref:`hardware_interfaces_defaults`. Both the hardware type and the hardware
interfaces can be changed later via the node update API.
@ -76,7 +71,7 @@ not work::
This is because the ``fake-hardware`` hardware type defaults to ``fake``
implementations for some or all interfaces, but the ``ipmi`` hardware type is
incompatible with them. There are three ways to deal with this situation:
#. Provide new values for all incompatible interfaces, for example::
@ -95,6 +90,9 @@ incompatible with them. There are three ways to deal with this situation:
--reset-management-interface \
--reset-power-interface
.. note:: This feature is available starting with ironic 11.1.0 (Rocky
series, API version 1.45).
#. Request resetting all interfaces to their new defaults::
baremetal node set test --driver ipmi --reset-interfaces
@ -104,6 +102,9 @@ incompatible with them. There are three ways to deal with this situation:
baremetal node set test --driver ipmi --reset-interfaces \
--deploy-interface direct
.. note:: This feature is available starting with ironic 11.1.0 (Rocky
series, API version 1.45).
.. _static-boot-order:
Static boot order configuration
@ -115,7 +116,7 @@ implementation with the ``ipmi`` and ``redfish`` hardware types. In this case
the Bare Metal service will not change the boot device for you, leaving
the pre-configured boot order.
For example, in the case of the :ref:`pxe-boot`:
For example, in case of the :ref:`pxe-boot`:
#. Via any available means configure the boot order on the node as follows:
@ -125,7 +126,7 @@ For example, in the case of the :ref:`pxe-boot`:
If it is not possible to limit network boot to only provisioning NIC,
make sure that no other DHCP/PXE servers are accessible by the node.
#. Boot from the hard drive.
#. Boot from hard drive.
#. Make sure the ``noop`` management interface is enabled, for example:
@ -138,3 +139,23 @@ For example, in the case of the :ref:`pxe-boot`:
#. Change the node to use the ``noop`` management interface::
baremetal node set <NODE> --management-interface noop
Unsupported drivers
-------------------
The following drivers were declared as unsupported in the ironic Newton release
and were removed from ironic as of the Ocata release:
- AMT driver - available as part of ironic-staging-drivers_
- iBoot driver - available as part of ironic-staging-drivers_
- Wake-On-Lan driver - available as part of ironic-staging-drivers_
- Virtualbox drivers
- SeaMicro drivers
- MSFT OCS drivers
The SSH drivers were removed in the Pike release. Similar functionality can be
achieved either with VirtualBMC_ or using libvirt drivers from
ironic-staging-drivers_.
.. _ironic-staging-drivers: http://ironic-staging-drivers.readthedocs.io
.. _VirtualBMC: https://opendev.org/openstack/virtualbmc

View file

@ -115,7 +115,7 @@ Logging
Logging is implemented as a custom Ansible callback module
that makes use of the ``oslo.log`` and ``oslo.config`` libraries
and can reuse logging configuration defined in the main ironic configuration
file to set logging for Ansible events, or use a separate file for this purpose.
It works best when ``journald`` support for logging is enabled.
@ -378,26 +378,26 @@ Those values are then accessible in your plays as well
passed inside this variable. Some extra notes and fields:
- ``mem_req`` is calculated from image size (if available) and config
option :oslo.config:option:`ansible.extra_memory`.
- if ``checksum`` is not in the form ``<hash-algo>:<hash-sum>``, the hashing
algorithm is assumed to be ``md5`` (the default in Glance).
- ``validate_certs`` - boolean (``yes/no``) flag that turns validating
the image store SSL certificate on or off (the default is ``yes``).
Governed by the :oslo.config:option:`ansible.image_store_insecure` option
in ironic configuration file.
- ``cafile`` - custom CA bundle to use for validating image store
SSL certificate.
Takes value of :oslo.config:option:`ansible.image_store_cafile` if that is defined.
Currently it is not used by the default playbooks, as Ansible has no way to
specify a custom CA bundle for single HTTPS actions;
however, you can use this value in your custom playbooks to, for example,
upload and register this CA in the ramdisk at deploy time.
- ``client_cert`` - cert file for client-side SSL authentication.
Takes value of :oslo.config:option:`ansible.image_store_certfile` option if defined.
Currently it is not used by the default playbooks;
however, you can use this value in your custom playbooks.
- ``client_key`` - private key file for client-side SSL authentication.
Takes value of :oslo.config:option:`ansible.image_store_keyfile` option if defined.
Currently it is not used by the default playbooks;
however, you can use this value in your custom playbooks.
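For reference, these options live in the ``[ansible]`` section of the ironic
configuration file. A minimal sketch with illustrative values::
[ansible]
extra_memory = 10
image_store_insecure = false
image_store_cafile = /etc/ironic/ca-bundle.crt
image_store_certfile = /etc/ironic/client.crt
image_store_keyfile = /etc/ironic/client.key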

View file

@ -0,0 +1,323 @@
===============
iBMC driver
===============
Overview
========
.. warning::
The ``ibmc`` driver has been deprecated and is anticipated to be removed
from Ironic at some point during or after the 2024.2 development cycle.
The anticipated forward management path is to migrate to the ``redfish``
hardware type.
The ``ibmc`` driver is targeted for Huawei V5 series rack servers such as
the 2288H V5 and CH121 V5. The iBMC hardware type enables the user to take
advantage of the features of `Huawei iBMC`_ to control Huawei servers.
The ``ibmc`` hardware type supports the following Ironic interfaces:
* Management Interface: Boot device management
* Power Interface: Power management
* `RAID Interface`_: RAID controller and disk management
* `Vendor Interface`_: ibmc passthru interfaces
Prerequisites
=============
The `HUAWEI iBMC Client library`_ should be installed on the ironic conductor
node(s).
For example, it can be installed with ``pip``::
sudo pip install python-ibmcclient
Enabling the iBMC driver
============================
#. Add ``ibmc`` to the list of ``enabled_hardware_types``,
``enabled_power_interfaces``, ``enabled_raid_interfaces``,
``enabled_vendor_interfaces`` and ``enabled_management_interfaces``
in ``/etc/ironic/ironic.conf``. For example::
[DEFAULT]
...
enabled_hardware_types = ibmc
enabled_power_interfaces = ibmc
enabled_management_interfaces = ibmc
enabled_raid_interfaces = ibmc
enabled_vendor_interfaces = ibmc
#. Restart the ironic conductor service::
sudo service ironic-conductor restart
# Or, for RDO:
sudo systemctl restart openstack-ironic-conductor
Registering a node with the iBMC driver
===========================================
Nodes configured to use the driver should have the ``driver`` property
set to ``ibmc``.
The following properties are specified in the node's ``driver_info``
field:
- ``ibmc_address``:
The URL address to the ibmc controller. It must
include the authority portion of the URL, and can
optionally include the scheme. If the scheme is
missing, https is assumed.
For example: https://ibmc.example.com. This is required.
- ``ibmc_username``:
User account with admin/server-profile access
privilege. This is required.
- ``ibmc_password``:
User account password. This is required.
- ``ibmc_verify_ca``:
If ibmc_address has the **https** scheme, the
driver will use a secure (TLS_) connection when
talking to the ibmc controller. By default
(if this is set to True), the driver will try to
verify the host certificates. This can be set to
the path of a certificate file or directory with
trusted certificates that the driver will use for
verification. To disable verifying TLS_, set this
to False. This is optional.
The ``baremetal node create`` command can be used to enroll
a node with the ``ibmc`` driver. For example:
.. code-block:: bash
baremetal node create --driver ibmc \
--driver-info ibmc_address=https://example.com \
--driver-info ibmc_username=admin \
--driver-info ibmc_password=password
For more information about enrolling nodes see :ref:`enrollment`
in the install guide.
RAID Interface
==============
Currently, only RAID controller which supports OOB management can be managed.
See :doc:`/admin/raid` for more information on Ironic RAID support.
The following properties are supported by the iBMC raid interface
implementation, ``ibmc``:
Mandatory properties
--------------------
* ``size_gb``: Size in gigabytes (integer) for the logical disk. Use ``MAX`` as
``size_gb`` if this logical disk is supposed to use the rest of the space
available.
* ``raid_level``: RAID level for the logical disk. Valid values are
``JBOD``, ``0``, ``1``, ``5``, ``6``, ``1+0``, ``5+0`` and ``6+0``. Note that
some RAID controllers may support only a subset of these RAID levels.
.. NOTE::
RAID level ``2`` is not supported by ``iBMC`` driver.
Optional properties
-------------------
* ``is_root_volume``: Optional. Specifies whether this disk is a root volume.
By default, this is ``False``.
* ``volume_name``: Optional. Name of the volume to be created. If this is not
specified, it will be N/A.
Backing physical disk hints
---------------------------
See :doc:`/admin/raid` for more information on backing disk hints.
These are machine-independent properties. The hints are specified for each
logical disk to help Ironic find the desired disks for RAID configuration.
* ``share_physical_disks``
* ``disk_type``
* ``interface_type``
* ``number_of_physical_disks``
Backing physical disks
----------------------
These are HUAWEI RAID controller dependent properties:
* ``controller``: Optional. Supported values are: RAID storage id,
RAID storage name or RAID controller name. If a bare metal server has more
than one controller, this is mandatory. Typical values would look like:
* RAID Storage Id: ``RAIDStorage0``
* RAID Storage Name: ``RAIDStorage0``
* RAID Controller Name: ``RAID Card1 Controller``.
* ``physical_disks``: Optional. Supported values are: disk id, disk name or
disk serial number. Typical values for an HDD disk would look like:
* Disk Id: ``HDDPlaneDisk0``
* Disk Name: ``Disk0``.
* Disk SerialNumber: ``38DGK77LF77D``
Delete RAID configuration
-------------------------
For the ``delete_configuration`` step, ``ibmc`` will:
* delete all logical disks
* delete all hot-spare disks
Logical disks creation priority
-------------------------------
Logical disk creation priority is based on three properties:
* ``share_physical_disks``
* ``physical_disks``
* ``size_gb``
Logical disk creation strictly follows the priority in the table below. If
multiple logical disks have the same priority, they are created in the order
in which they appear in the ``logical_disks`` array.
==================== ========================== =========
Share physical disks Specified Physical Disks Size
==================== ========================== =========
no yes int|max
no no int
yes yes int
yes yes max
yes no int
yes no max
no no max
==================== ========================== =========
Physical disks choice strategy
------------------------------
.. note::
physical-disk-group: a group of physical disks which have been used by some
logical disks with the same RAID level.
* If no ``physical_disks`` are specified, the "waste least" strategy will be
used to choose the physical disks.
* waste least disk capacity: when using disks of different capacities, some
disk capacity is wasted. Avoiding this has the highest priority.
* use least total disk capacity: for example, a 400G RAID 5 can be created
with either 5 100G-disks or 3 200G-disks. The 5 100G-disks are the better
choice, because they use 500G in total while the 3 200G-disks use 600G.
* use least disk count: finally, if the wasted capacity and the total disk
capacity are both the same (which rarely happens), choose the solution
with the minimum number of disks.
* When the ``share_physical_disks`` option is present, the ``ibmc`` driver
will first try to create the logical disk on an existing
physical-disk-group. Only when no existing physical-disk-group matches does
it choose unused physical disks, using the same strategy described above.
When multiple existing physical-disk-groups match, it also uses the "waste
least" strategy, preferring the choice that leaves the larger capacity
available. For example, consider creating the logical disk shown below on
an ``ibmc`` server which already has two RAID 5 logical disks whose
shareable capacities are 500G and 300G; the ``ibmc`` driver will choose the
second one.
.. code-block:: json
{
"logical_disks": [
{
"controller": "RAID Card1 Controller",
"raid_level": "5",
"size_gb": 100,
"share_physical_disks": true
}
]
}
* When ``size_gb`` is set to ``MAX``, the ``ibmc`` driver will automatically
work through all possible cases and choose the "best" solution, i.e. the one
with the biggest capacity that uses the fewest disks. For example, to create
a RAID 5+0 logical disk with MAX size on a server that has 9 200G-disks, it
will choose "8 disks + span-number 2" rather than "9 disks + span-number 3".
Although both provide 1200G in total, the former uses only 8 disks while the
latter uses 9. If you want the latter solution, you can specify the disk
count to use by adding the ``number_of_physical_disks`` option.
.. code-block:: json
{
"logical_disks": [
{
"controller": "RAID Card1 Controller",
"raid_level": "5+0",
"size_gb": "MAX"
}
]
}
Examples
--------
In a typical scenario we may want to create:
* a RAID 5, 500G root OS volume using 3 disks
* a RAID 5 data volume using the rest of the available space on the remaining disks
.. code-block:: json
{
"logical_disks": [
{
"volume_name": "os_volume",
"controller": "RAID Card1 Controller",
"is_root_volume": "True",
"physical_disks": [
"Disk0",
"Disk1",
"Disk2"
],
"raid_level": "5",
"size_gb": "500"
},
{
"volume_name": "data_volume",
"controller": "RAID Card1 Controller",
"raid_level": "5",
"size_gb": "MAX"
}
]
}
Vendor Interface
=========================================
The ``ibmc`` hardware type provides vendor passthru interfaces shown below:
======================== ============ ======================================
Method Name HTTP Method Description
======================== ============ ======================================
boot_up_seq GET Query boot up sequence
get_raid_controller_list GET Query RAID controller summary info
======================== ============ ======================================
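These methods can be invoked via the vendor passthru CLI used elsewhere in
this guide; for example (the node identifier is a placeholder)::
baremetal node passthru call --http-method GET <node> boot_up_seq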
.. _Huawei iBMC: https://e.huawei.com/en/products/computing/kunpeng/accessories/ibmc
.. _TLS: https://en.wikipedia.org/wiki/Transport_Layer_Security
.. _HUAWEI iBMC Client library: https://pypi.org/project/python-ibmcclient/

View file

@ -5,12 +5,17 @@ iDRAC driver
Overview
========
.. warning::
The ``-wsman`` driver interfaces have been deprecated and are anticipated
to be removed from Ironic at some point during or after the 2024.2
development cycle. The anticipated forward management path is to migrate
to the ``-redfish`` driver interfaces or the ``redfish`` hardware type.
The integrated Dell Remote Access Controller (iDRAC_) is an out-of-band
management platform on Dell EMC servers, and is supported directly by
the ``idrac`` hardware type. This driver uses the Dell Web Services for
Management (WSMAN) protocol and the standard Distributed Management Task
Force (DMTF) Redfish protocol to perform all of its functions.
iDRAC_ hardware is also supported by the generic ``ipmi`` and ``redfish``
hardware types, though with smaller feature sets.
@ -26,52 +31,62 @@ Key features of the Dell iDRAC driver include:
Ironic Features
---------------
The ``idrac`` hardware type supports the following Ironic interfaces:
* `BIOS Interface`_: BIOS management
* `Inspect Interface`_: Hardware inspection
* `Management Interface`_: Boot device and firmware management
* Power Interface: Power management
* `RAID Interface`_: RAID controller and disk management
* `Vendor Interface`_: BIOS management (WSMAN) and eject virtual media
(Redfish)
Prerequisites
-------------
The ``idrac`` hardware type requires the ``python-dracclient`` library
to be installed on the ironic conductor node(s) if an Ironic node is
configured to use an ``idrac-wsman`` interface implementation, for example::
sudo pip install 'python-dracclient>=3.1.0'
Additionally, the ``idrac`` hardware type requires the ``sushy`` library
to be installed on the ironic conductor node(s) if an Ironic node is
configured to use an ``idrac-redfish`` interface implementation, for example::
sudo pip install 'sushy>=2.0.0'
Enabling
--------
The iDRAC driver supports WSMAN for the bios, inspect, management, power,
raid, and vendor interfaces. In addition, it supports Redfish for
the bios, inspect, management, power, and raid interfaces. The iDRAC driver
allows you to mix and match WSMAN and Redfish interfaces.
The ``idrac-wsman`` implementation must be enabled to use WSMAN for
an interface. The ``idrac-redfish`` implementation must be enabled
to use Redfish for an interface.
To enable the ``idrac`` hardware type with the minimum interfaces,
all using WSMAN, add the following to your ``/etc/ironic/ironic.conf``:
.. code-block:: ini
[DEFAULT]
enabled_hardware_types=idrac
enabled_management_interfaces=idrac-wsman
enabled_power_interfaces=idrac-wsman
To enable all optional features (BIOS, inspection, RAID, and vendor passthru)
using Redfish where it is supported and WSMAN where not, use the
following configuration:
.. code-block:: ini
[DEFAULT]
enabled_hardware_types=idrac
enabled_bios_interfaces=idrac-redfish
enabled_firmware_interfaces=redfish
enabled_inspect_interfaces=idrac-redfish
enabled_management_interfaces=idrac-redfish
@ -85,31 +100,43 @@ order:
================ ===================================================
Interface Supported Implementations
================ ===================================================
``bios`` ``idrac-wsman``, ``idrac-redfish``, ``no-bios``
``boot`` ``ipxe``, ``pxe``, ``idrac-redfish-virtual-media``
``console`` ``no-console``
``deploy`` ``direct``, ``ansible``, ``ramdisk``
``firmware`` ``redfish``, ``no-firmware``
``inspect`` ``idrac-wsman``, ``idrac``, ``idrac-redfish``,
``inspector``, ``no-inspect``
``management`` ``idrac-wsman``, ``idrac``, ``idrac-redfish``
``network`` ``flat``, ``neutron``, ``noop``
``power`` ``idrac-wsman``, ``idrac``, ``idrac-redfish``
``raid`` ``idrac-wsman``, ``idrac``, ``idrac-redfish``, ``no-raid``
``rescue`` ``no-rescue``, ``agent``
``storage`` ``noop``, ``cinder``, ``external``
``vendor`` ``idrac-wsman``, ``idrac``, ``idrac-redfish``,
``no-vendor``
================ ===================================================
.. NOTE::
``idrac`` is the legacy name of the WSMAN interface. It has been
deprecated in favor of ``idrac-wsman`` and may be removed in a
future release.
Protocol-specific Properties
----------------------------
The WSMAN and Redfish protocols require different properties to be specified
in the Ironic node's ``driver_info`` field to communicate with the bare
metal system's iDRAC.
The WSMAN protocol requires the following properties:
* ``drac_username``: The WSMAN user name to use when communicating
with the iDRAC. Usually ``root``.
* ``drac_password``: The password for the WSMAN user to use when
communicating with the iDRAC.
* ``drac_address``: The IP address of the iDRAC.
The Redfish protocol requires the following properties:
* ``redfish_username``: The Redfish user name to use when
@ -124,9 +151,25 @@ The Redfish protocol requires the following properties:
For other Redfish protocol parameters see :doc:`/admin/drivers/redfish`.
If using only interfaces which use WSMAN (``idrac-wsman``), then only
the WSMAN properties must be supplied. If using only interfaces which
use Redfish (``idrac-redfish``), then only the Redfish properties must be
supplied. If using a mix of interfaces, where some use WSMAN and others
use Redfish, both the WSMAN and Redfish properties must be supplied.
Enrolling
---------
The following command enrolls a bare metal node with the ``idrac``
hardware type using WSMAN for all interfaces:
.. code-block:: bash
baremetal node create --driver idrac \
--driver-info drac_username=user \
--driver-info drac_password=pa$$w0rd \
--driver-info drac_address=drac.host
The following command enrolls a bare metal node with the ``idrac``
hardware type using Redfish for all interfaces:
@ -137,12 +180,35 @@ hardware type using Redfish for all interfaces:
--driver-info redfish_password=pa$$w0rd \
--driver-info redfish_address=drac.host \
--driver-info redfish_system_id=/redfish/v1/Systems/System.Embedded.1 \
--bios-interface idrac-redfish \
--inspect-interface idrac-redfish \
--management-interface idrac-redfish \
--power-interface idrac-redfish \
--raid-interface idrac-redfish \
--vendor-interface idrac-redfish
The following command enrolls a bare metal node with the ``idrac``
hardware type assuming a mix of Redfish and WSMAN interfaces are used:
.. code-block:: bash
baremetal node create --driver idrac \
--driver-info drac_username=user \
--driver-info drac_password=pa$$w0rd \
--driver-info drac_address=drac.host \
--driver-info redfish_username=user \
--driver-info redfish_password=pa$$w0rd \
--driver-info redfish_address=drac.host \
--driver-info redfish_system_id=/redfish/v1/Systems/System.Embedded.1 \
--bios-interface idrac-redfish \
--inspect-interface idrac-redfish \
--management-interface idrac-redfish \
--power-interface idrac-redfish
.. NOTE::
If using WSMAN for the management interface, then WSMAN must be used
for the power interface. The same applies to Redfish. It is currently not
possible to use Redfish for one and WSMAN for the other.
BIOS Interface
==============
@ -186,7 +252,7 @@ Inspect Interface
The Dell iDRAC out-of-band inspection process catalogs all the same
attributes of the server as the IPMI driver. Unlike IPMI, it does this
without requiring the system to be rebooted, or even to be powered on.
Inspection is performed using the Dell WSMAN or Redfish protocol directly
without affecting the operation of the system being inspected.
The inspection discovers the following properties:
@ -201,6 +267,8 @@ Extra capabilities:
* ``pci_gpu_devices``: number of GPU devices connected to the bare metal.
It also creates baremetal ports for each NIC port detected in the system.
The ``idrac-wsman`` inspect interface discovers which NIC ports are
configured to PXE boot and sets ``pxe_enabled`` to ``True`` on those ports.
The ``idrac-redfish`` inspect interface does not currently set ``pxe_enabled``
on the ports. The user should ensure that ``pxe_enabled`` is set correctly on
the ports following inspection with the ``idrac-redfish`` inspect interface.
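For example, a port can be updated after inspection (the port UUID is a
placeholder)::
baremetal port set <port-uuid> --pxe-enabled true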
@ -368,7 +436,7 @@ Storage setup
To start using these steps, configure the storage location. The settings can be
found in the ``[molds]`` section. Configure the storage type from the
:oslo.config:option:`molds.storage` setting. Currently, ``swift``, which is enabled by default,
and ``http`` are supported.
In the setup input parameters, the complete HTTP URL is used. This requires
@ -400,7 +468,7 @@ To use HTTP server with configuration molds,
#. Enable HTTP PUT support.
#. Create the directory to be used for the configuration mold storage.
#. Configure read/write access for HTTP Basic access authentication and provide
user credentials in :oslo.config:option:`molds.user` and :oslo.config:option:`molds.password` fields.
The HTTP web server does not support multitenancy and is intended to be used in
a stand-alone Ironic, or single-tenant OpenStack environment.
@ -419,7 +487,7 @@ Compared to ``redfish`` RAID interface, using ``idrac-redfish`` adds:
* Converting non-RAID disks to RAID mode if there are any,
* Clearing foreign configuration, if any, after deleting virtual disks.
The following properties are supported by the iDRAC WSMAN and Redfish RAID
interface implementation:
.. NOTE::
@ -565,6 +633,223 @@ Or using ``sushy`` with Redfish:
Vendor Interface
================
idrac-wsman
-----------
Dell iDRAC BIOS management is available through the Ironic WSMAN vendor
passthru interface.
======================== ============ ======================================
Method Name HTTP Method Description
======================== ============ ======================================
``abandon_bios_config`` ``DELETE`` Abandon a BIOS configuration job.
``commit_bios_config`` ``POST`` Commit a BIOS configuration job
submitted through ``set_bios_config``.
Required argument: ``reboot`` -
indicates whether a reboot job
should be automatically created
with the config job. Returns a
dictionary containing the ``job_id``
key with the ID of the newly created
config job, and the
``reboot_required`` key indicating
whether the node needs to be rebooted
to execute the config job.
``get_bios_config`` ``GET`` Returns a dictionary containing the
node's BIOS settings.
``list_unfinished_jobs`` ``GET`` Returns a dictionary containing
the key ``unfinished_jobs``; its value
is a list of dictionaries. Each
dictionary represents an unfinished
config job object.
``set_bios_config`` ``POST`` Change the BIOS configuration on
a node. Required argument: a
dictionary of {``AttributeName``:
``NewValue``}. Returns a dictionary
containing the ``is_commit_required``
key indicating whether
``commit_bios_config`` needs to be
called to apply the changes and the
``is_reboot_required`` value
indicating whether the server must
also be rebooted. Possible values are
``true`` and ``false``.
======================== ============ ======================================
Examples
^^^^^^^^
Get BIOS Config
~~~~~~~~~~~~~~~
.. code-block:: bash
baremetal node passthru call --http-method GET <node> get_bios_config
Snippet of output showing virtualization enabled:
.. code-block:: json
{"ProcVirtualization": {
"current_value": "Enabled",
"instance_id": "BIOS.Setup.1-1:ProcVirtualization",
"name": "ProcVirtualization",
"pending_value": null,
"possible_values": [
"Enabled",
"Disabled"],
"read_only": false }}
There are a number of items to note from the above snippet:
* ``name``: this is the name to use in a call to ``set_bios_config``.
* ``current_value``: the current state of the setting.
* ``pending_value``: if the value has been set, but not yet committed,
the new value is shown here. The change can either be committed or
abandoned.
* ``possible_values``: shows a list of valid values which can be used
in a call to ``set_bios_config``.
* ``read_only``: indicates if the value is capable of being changed.
Set BIOS Config
~~~~~~~~~~~~~~~
.. code-block:: bash
baremetal node passthru call <node> set_bios_config --arg "name=value"
Walkthrough of performing a BIOS configuration change:
The following section demonstrates how to change BIOS configuration settings,
detect that a commit and reboot are required, and act on them accordingly. The
two properties that are being changed are:
* Enable virtualization technology of the processor
* Globally enable SR-IOV
.. code-block:: bash
baremetal node passthru call <node> set_bios_config \
--arg "ProcVirtualization=Enabled" \
--arg "SriovGlobalEnable=Enabled"
This returns a dictionary indicating what actions are required next:
.. code-block:: json
{
"is_reboot_required": true,
"is_commit_required": true
}
Commit BIOS Changes
~~~~~~~~~~~~~~~~~~~
The next step is to commit the pending change to the BIOS. Note that in this
example, the ``reboot`` argument is set to ``true``. The response indicates
that a reboot is no longer required as it has been scheduled automatically
by the ``commit_bios_config`` call. If the reboot argument is not supplied,
the job is still created, however it remains in the ``scheduled`` state
until a reboot is performed. The reboot can be initiated through the
Ironic power API.
.. code-block:: bash
baremetal node passthru call <node> commit_bios_config \
--arg "reboot=true"
.. code-block:: json
{
"job_id": "JID_499377293428",
"reboot_required": false
}
The state of any executing job can be queried:
.. code-block:: bash
baremetal node passthru call --http-method GET <node> list_unfinished_jobs
.. code-block:: json
{"unfinished_jobs":
[{"status": "Scheduled",
"name": "ConfigBIOS:BIOS.Setup.1-1",
"until_time": "TIME_NA",
"start_time": "TIME_NOW",
"message": "Task successfully scheduled.",
"percent_complete": "0",
"id": "JID_499377293428"}]}
Abandon BIOS Changes
~~~~~~~~~~~~~~~~~~~~
Instead of committing, a pending change can be abandoned:
.. code-block:: bash
baremetal node passthru call --http-method DELETE <node> abandon_bios_config
The abandon command does not provide a response body.
Change Boot Mode
^^^^^^^^^^^^^^^^
The boot mode of the iDRAC can be changed to:
* BIOS - Also called legacy or traditional boot mode. The BIOS initializes the
system's processors, memory, bus controllers, and I/O devices. After
initialization is complete, the BIOS passes control to operating system (OS)
software. The OS loader uses basic services provided by the system BIOS to
locate and load OS modules into system memory. After booting the system, the
BIOS and embedded management controllers execute system management
algorithms, which monitor and optimize the condition of the underlying
hardware. BIOS configuration settings enable fine-tuning of the
performance, power management, and reliability features of the system.
* UEFI - The Unified Extensible Firmware Interface does not change the
traditional purposes of the system BIOS. To a large extent, a UEFI-compliant
BIOS performs the same initialization, boot, configuration, and management
tasks as a traditional BIOS. However, UEFI does change the interfaces and
data structures the BIOS uses to interact with I/O device firmware and
operating system software. The primary intent of UEFI is to eliminate
shortcomings in the traditional BIOS environment, enabling system firmware to
continue scaling with industry trends.
The UEFI boot mode offers:
* Improved partitioning scheme for boot media
* Support for media larger than 2 TB
* Redundant partition tables
* Flexible handoff from BIOS to OS
* Consolidated firmware user interface
* Enhanced resource allocation for boot device firmware
The boot mode can be changed via the WSMAN vendor passthru interface as
follows:
.. code-block:: bash
baremetal node passthru call <node> set_bios_config \
--arg "BootMode=Uefi"
baremetal node passthru call <node> commit_bios_config \
--arg "reboot=true"
.. code-block:: bash
baremetal node passthru call <node> set_bios_config \
--arg "BootMode=Bios"
baremetal node passthru call <node> commit_bios_config \
--arg "reboot=true"
idrac-redfish
-------------
@ -590,7 +875,7 @@ Nodes go into maintenance mode
After some period of time, nodes managed by the ``idrac`` hardware type may go
into maintenance mode in Ironic. This issue can be worked around by changing
the Ironic power state poll interval to 70 seconds. See
:oslo.config:option:`conductor.sync_power_state_interval` in ``/etc/ironic/ironic.conf``.
``[conductor]sync_power_state_interval`` in ``/etc/ironic/ironic.conf``.
PXE reset with "factory_reset" BIOS clean step
----------------------------------------------
@ -611,6 +896,27 @@ settings.
.. _Ironic_RAID: https://docs.openstack.org/ironic/latest/admin/raid.html
.. _iDRAC: https://www.dell.com/idracmanuals
WSMAN vendor passthru timeout
-----------------------------
When the iDRAC is not ready and WSMAN vendor passthru commands are executed,
they take more time while waiting for the iDRAC to become ready again, and
then time out, for example:
.. code-block:: bash
baremetal node passthru call --http-method GET \
aed58dca-1b25-409a-a32f-3a817d59e1e0 list_unfinished_jobs
Timed out waiting for a reply to message ID 547ce7995342418c99ef1ea4a0054572 (HTTP 500)
To avoid this, increase the messaging timeout in ``/etc/ironic/ironic.conf``
and restart the Ironic API service.
.. code-block:: ini
[DEFAULT]
rpc_response_timeout = 600
Timeout when powering off
-------------------------

View file

@ -1,10 +1,3 @@
.. meta::
:description: Configure Ironic iLO driver for HPE ProLiant server management. Support for iLO 4, iLO 5, virtual media, and HPE-specific hardware features.
:keywords: ilo driver, hpe proliant, hpe servers, ilo4, ilo5, virtual media, hpe management, proliant automation
:author: OpenStack Ironic Team
:robots: index, follow
:audience: system administrators, hardware engineers
.. _ilo:
==========
@ -15,8 +8,8 @@ Overview
========
The iLO driver enables taking advantage of the features of the iLO management
engine in HPE ProLiant servers. The ``ilo`` hardware type is targeted for HPE
ProLiant Gen8 and Gen9 systems which have the `iLO 4 management engine`_. The
``ilo`` hardware type also supports ProLiant Gen10 systems which have the
`iLO 5 management engine`_. iLO5 conforms to `Redfish`_ API and hence hardware
type ``redfish`` (see :doc:`redfish`) is also an option for this kind of
hardware but it lacks the iLO specific features.
@ -30,15 +23,7 @@ support in Ironic please check this `Gen10 wiki section`_.
.. warning::
Starting from Gen11 servers and above (iLO6 and above), use the ``redfish``
(see :doc:`redfish`) hardware type for baremetal provisioning and
management. You can use the ``redfish`` hardware type for iLO5 hardware,
however RAID configuration is not available via Redfish until the iLO6
baseboard management controllers.
The Ironic community does not anticipate new features to be added to the
``ilo`` and ``ilo5`` hardware types as ``redfish`` is superseding
most vendor specific hardware types. These drivers are anticipated
to be available in Ironic as long as the ``proliantutils`` library
is maintained.
Hardware type
=============
@ -101,7 +86,7 @@ The ``ilo`` hardware type supports following hardware interfaces:
* bios
Supports ``ilo`` and ``no-bios``. The default is ``ilo``.
They can be enabled by using the :oslo.config:option:`DEFAULT.enabled_bios_interfaces`
option in ``ironic.conf`` as given below:
.. code-block:: ini
@ -117,7 +102,7 @@ The ``ilo`` hardware type supports following hardware interfaces:
media to boot up the bare metal node. The ``ilo-pxe`` and ``ilo-ipxe``
interfaces use PXE and iPXE respectively for deployment (just like
:ref:`pxe-boot`). These interfaces do not require iLO Advanced license.
They can be enabled by using the :oslo.config:option:`DEFAULT.enabled_boot_interfaces`
They can be enabled by using the ``[DEFAULT]enabled_boot_interfaces``
option in ``ironic.conf`` as given below:
.. code-block:: ini
@ -128,7 +113,7 @@ The ``ilo`` hardware type supports following hardware interfaces:
* console
Supports ``ilo`` and ``no-console``. The default is ``ilo``.
They can be enabled by using the :oslo.config:option:`DEFAULT.enabled_console_interfaces`
They can be enabled by using the ``[DEFAULT]enabled_console_interfaces``
option in ``ironic.conf`` as given below:
.. code-block:: ini
@ -145,19 +130,23 @@ The ``ilo`` hardware type supports following hardware interfaces:
management engine.
* inspect
Supports ``ilo`` and ``agent``. The default is ``ilo``. They
can be enabled by using the :oslo.config:option:`DEFAULT.enabled_inspect_interfaces` option
Supports ``ilo`` and ``inspector``. The default is ``ilo``. They
can be enabled by using the ``[DEFAULT]enabled_inspect_interfaces`` option
in ``ironic.conf`` as given below:
.. code-block:: ini
[DEFAULT]
enabled_hardware_types = ilo
enabled_inspect_interfaces = ilo,agent
enabled_inspect_interfaces = ilo,inspector
.. note::
:ironic-inspector-doc:`Ironic Inspector <>`
needs to be configured to use ``inspector`` as the inspect interface.
* management
Supports only ``ilo``. It can be enabled by using the
:oslo.config:option:`DEFAULT.enabled_management_interfaces` option in ``ironic.conf`` as
``[DEFAULT]enabled_management_interfaces`` option in ``ironic.conf`` as
given below:
.. code-block:: ini
@ -168,7 +157,7 @@ The ``ilo`` hardware type supports following hardware interfaces:
* power
Supports only ``ilo``. It can be enabled by using the
:oslo.config:option:`DEFAULT.enabled_power_interfaces` option in ``ironic.conf`` as given
``[DEFAULT]enabled_power_interfaces`` option in ``ironic.conf`` as given
below:
.. code-block:: ini
@ -179,7 +168,7 @@ The ``ilo`` hardware type supports following hardware interfaces:
* raid
Supports ``agent`` and ``no-raid``. The default is ``no-raid``.
They can be enabled by using the :oslo.config:option:`DEFAULT.enabled_raid_interfaces`
They can be enabled by using the ``[DEFAULT]enabled_raid_interfaces``
option in ``ironic.conf`` as given below:
.. code-block:: ini
@ -190,7 +179,7 @@ The ``ilo`` hardware type supports following hardware interfaces:
* storage
Supports ``cinder`` and ``noop``. The default is ``noop``.
They can be enabled by using the :oslo.config:option:`DEFAULT.enabled_storage_interfaces`
They can be enabled by using the ``[DEFAULT]enabled_storage_interfaces``
option in ``ironic.conf`` as given below:
.. code-block:: ini
@ -207,7 +196,7 @@ The ``ilo`` hardware type supports following hardware interfaces:
* rescue
Supports ``agent`` and ``no-rescue``. The default is ``no-rescue``.
They can be enabled by using the :oslo.config:option:`DEFAULT.enabled_rescue_interfaces`
They can be enabled by using the ``[DEFAULT]enabled_rescue_interfaces``
option in ``ironic.conf`` as given below:
.. code-block:: ini
@ -219,7 +208,7 @@ The ``ilo`` hardware type supports following hardware interfaces:
* vendor
Supports ``ilo``, ``ilo-redfish`` and ``no-vendor``. The default is
``ilo``. They can be enabled by using the
:oslo.config:option:`DEFAULT.enabled_vendor_interfaces` option in ``ironic.conf`` as given
``[DEFAULT]enabled_vendor_interfaces`` option in ``ironic.conf`` as given
below:
.. code-block:: ini
@ -235,7 +224,7 @@ except for ``boot`` and ``raid`` interfaces. The details of ``boot`` and
* raid
Supports ``ilo5`` and ``no-raid``. The default is ``ilo5``.
They can be enabled by using the :oslo.config:option:`DEFAULT.enabled_raid_interfaces`
They can be enabled by using the ``[DEFAULT]enabled_raid_interfaces``
option in ``ironic.conf`` as given below:
.. code-block:: ini
@ -247,7 +236,7 @@ except for ``boot`` and ``raid`` interfaces. The details of ``boot`` and
* boot
Supports ``ilo-uefi-https`` apart from the other boot interfaces supported
by ``ilo`` hardware type.
This can be enabled by using the :oslo.config:option:`DEFAULT.enabled_boot_interfaces`
This can be enabled by using the ``[DEFAULT]enabled_boot_interfaces``
option in ``ironic.conf`` as given below:
.. code-block:: ini
@ -372,8 +361,8 @@ Node configuration
before the Xena release.
* The following parameters are mandatory in ``driver_info``
if ``ilo-inspect`` inspect interface is used and SNMPv3 inspection
(``SNMPv3 Authentication`` in `HPE iLO4 User Guide`_) is desired:
if ``ilo-inspect`` inspect interface is used and SNMPv3 inspection
(`SNMPv3 Authentication` in `HPE iLO4 User Guide`_) is desired:
* ``snmp_auth_user``: The SNMPv3 user.
@ -902,7 +891,7 @@ The hardware type ``ilo`` supports hardware inspection.
an error. This feature is available in proliantutils release
version >= 2.2.0.
* The iLO must be updated with SNMPv3 authentication details.
Please refer to the section ``SNMPv3 Authentication`` in `HPE iLO4 User Guide`_
Please refer to the section `SNMPv3 Authentication` in `HPE iLO4 User Guide`_
for setting up authentication details on iLO.
The following parameters are mandatory to be given in driver_info
for SNMPv3 inspection:
@ -1535,9 +1524,9 @@ An example of a manual clean step with ``create_csr`` as the only clean step cou
}
}]
The :oslo.config:option:`ilo.cert_path` option in ``ironic.conf`` is used as the directory path for
The ``[ilo]cert_path`` option in ``ironic.conf`` is used as the directory path for
creating the CSR, which defaults to ``/var/lib/ironic/ilo``. The CSR is created in the directory location
given in :oslo.config:option:`ilo.cert_path` in ``node_uuid`` directory as <node_uuid>.csr.
given in ``[ilo]cert_path`` in ``node_uuid`` directory as <node_uuid>.csr.
Add HTTPS Certificate as manual clean step
@ -1559,7 +1548,7 @@ An example of a manual clean step with ``add_https_certificate`` as the only cle
Argument ``cert_file`` is mandatory. The ``cert_file`` takes the path or url of the certificate file.
The url schemes supported are: ``file``, ``http`` and ``https``.
The CSR generated in step ``create_csr`` needs to be signed by a valid CA and the resultant HTTPS certificate should
be provided in ``cert_file``. It copies the ``cert_file`` to :oslo.config:option:`ilo.cert_path` under ``node.uuid`` as <node_uuid>.crt
be provided in ``cert_file``. It copies the ``cert_file`` to ``[ilo]cert_path`` under ``node.uuid`` as <node_uuid>.crt
before adding it to iLO.
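For illustration, the step might be invoked as a manual clean step roughly as
follows (a sketch; the node UUID and certificate URL are placeholders, and the
``management`` interface name is an assumption):
.. code-block:: bash
baremetal node clean <ironic_node_uuid> --clean-steps '[{"interface": "management", "step": "add_https_certificate", "args": {"cert_file": "https://ca.example.com/<node_uuid>.crt"}}]'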
RAID Support
@ -1586,7 +1575,7 @@ configuration of RAID:
DIB support for Proliant Hardware Manager
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Install `ironic-python-agent-builder`_
Install ``ironic-python-agent-builder`` following the guide [1]_
To create an agent ramdisk with ``Proliant Hardware Manager``,
use the ``proliant-tools`` element in DIB::
@ -1618,7 +1607,7 @@ This clean step is performed as part of automated cleaning and it is disabled
by default. See :ref:`InbandvsOutOfBandCleaning` for more information on
enabling/disabling a clean step.
Install `ironic-python-agent-builder`_.
Install ``ironic-python-agent-builder`` following the guide [1]_
To create an agent ramdisk with ``Proliant Hardware Manager``, use the
``proliant-tools`` element in DIB::
@ -1818,7 +1807,7 @@ refer to `HPE Integrated Lights-Out REST API Documentation <https://hewlettpacka
Allowed values are ``Enabled``, ``Disabled``.
- ``WorkloadProfile``:
Change the Workload Profile to accommodate your desired workload.
Change the Workload Profile to accommodate your desired workload.
Allowed values are ``GeneralPowerEfficientCompute``,
``GeneralPeakFrequencyCompute``, ``GeneralThroughputCompute``,
``Virtualization-PowerEfficient``, ``Virtualization-MaxPerformance``,
@ -1838,7 +1827,7 @@ the node's ``driver_info``. To update SSL certificates into iLO,
refer to `HPE Integrated Lights-Out Security Technology Brief <http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504>`_.
Use iLO hostname or IP address as a 'Common Name (CN)' while
generating Certificate Signing Request (CSR). Use the same value as
``ilo_address`` while enrolling node to Bare Metal service to avoid SSL
`ilo_address` while enrolling node to Bare Metal service to avoid SSL
certificate validation errors related to hostname mismatch.
Rescue mode support
@ -1884,7 +1873,7 @@ soft power operations on a server:
[--power-timeout <power-timeout>] <node>
.. note::
The configuration :oslo.config:option:`conductor.soft_power_off_timeout` is used as a
The configuration ``[conductor]soft_power_off_timeout`` is used as a
default timeout value when no timeout is provided while invoking
hard or soft power operations.
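For example, a soft power off with an explicit timeout might look like the
following (the 600-second value is purely illustrative):
.. code-block:: bash
baremetal node power off --soft --power-timeout 600 <node>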
@ -2054,7 +2043,7 @@ Events subscription
^^^^^^^^^^^^^^^^^^^
Events subscription is supported by ``ilo`` and ``ilo5`` hardware types with
``ilo`` vendor interface for Gen10 and Gen10 Plus servers. See
:doc:`redfish/passthru` for more information.
:ref:`node-vendor-passthru-methods` for more information.
Anaconda based deployment
^^^^^^^^^^^^^^^^^^^^^^^^^
@ -2075,5 +2064,5 @@ more information.
.. _`Guidelines for SPP ISO`: https://h17007.www1.hpe.com/us/en/enterprise/servers/products/service_pack/spp
.. _`SUM`: https://h17007.www1.hpe.com/us/en/enterprise/servers/products/service_pack/hpsum/index.aspx
.. _`SUM User Guide`: https://h20565.www2.hpe.com/hpsc/doc/public/display?docId=c05210448
.. _`ironic-python-agent-builder`: https://docs.openstack.org/ironic-python-agent-builder/latest/install/index.html
.. [1] `ironic-python-agent-builder`: https://docs.openstack.org/ironic-python-agent-builder/latest/install/index.html
.. _`HPE Integrated Lights-Out Security Technology Brief`: http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504

View file

@ -94,8 +94,8 @@ A node with Intel SST-PP can be configured to use it via
* ``intel_speedselect_config``:
Hexadecimal code of Intel SST-PP configuration. Accepted values are
'0x00', '0x01', '0x02'. These values correspond to
``Intel SST-PP Config Base``, ``Intel SST-PP Config 1``,
``Intel SST-PP Config 2`` respectively. The input value must be a string.
`Intel SST-PP Config Base`, `Intel SST-PP Config 1`,
`Intel SST-PP Config 2` respectively. The input value must be a string.
* ``socket_count``:
Number of sockets in the node. The input value must be a positive

View file

@ -58,14 +58,14 @@ Steps to enable proxies
sensitive information. Refer to your proxy server's documentation to
complete this step.
#. Set :oslo.config:option:`glance.swift_temp_url_cache_enabled` in the ironic conductor config
#. Set ``[glance]swift_temp_url_cache_enabled`` in the ironic conductor config
file to ``True``. The conductor will reuse the cached swift temporary URLs
instead of generating new ones each time an image is requested, so that the
proxy server does not create new cache entries for the same image, based on
the query part of the URL (as it contains some query parameters that change
each time it is regenerated).
#. Set :oslo.config:option:`glance.swift_temp_url_expected_download_start_delay` option in the
#. Set ``[glance]swift_temp_url_expected_download_start_delay`` option in the
ironic conductor config file to the value appropriate for your hardware.
This is the delay (in seconds) from the time of the deploy request (when
the swift temporary URL is generated) to when the URL is used for the image
@ -74,15 +74,15 @@ Steps to enable proxies
temporary URL duration is large enough to let the image download begin. Also
if temporary URL caching is enabled, this will determine if a cached entry
will still be valid when the download starts. It is used only if
:oslo.config:option:`glance.swift_temp_url_cache_enabled` is ``True``.
``[glance]swift_temp_url_cache_enabled`` is ``True``.
#. Increase :oslo.config:option:`glance.swift_temp_url_duration` option in the ironic conductor
#. Increase ``[glance]swift_temp_url_duration`` option in the ironic conductor
config file, as only non-expired links to images will be returned from the
swift temporary URLs cache. This means that if
``swift_temp_url_duration=1200`` then after 20 minutes a new image will be
cached by the proxy server as the query in its URL will change. The value of
this option must be greater than or equal to
:oslo.config:option:`glance.swift_temp_url_expected_download_start_delay`.
``[glance]swift_temp_url_expected_download_start_delay``.
#. Add one or more of ``image_http_proxy``, ``image_https_proxy``,
``image_no_proxy`` to driver_info properties in each node that will use the

View file

@ -1,10 +1,3 @@
.. meta::
:description: Configure Ironic IPMI driver using ipmitool for legacy server management. Power control, console access, and hardware monitoring via IPMI protocol.
:keywords: ipmi driver, ipmitool, legacy servers, power management, console access, hardware monitoring, baseboard management controller
:author: OpenStack Ironic Team
:robots: index, follow
:audience: system administrators, hardware engineers
===========
IPMI driver
===========
@ -89,28 +82,6 @@ with an IPMItool-based driver. For example::
--driver-info ipmi_username=<username> \
--driver-info ipmi_password=<password>
Changing The Default IPMI Credential Persistence Method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- ``store_cred_in_env``: :oslo.config:option:`ipmi.store_cred_in_env`.
The ``store_cred_in_env`` configuration option allows users to switch
between the file-based and environment-variable persistence methods for
the IPMI password.
For the temporary file option, long-lived IPMI sessions, such as those for
console support, leave files with credentials on the conductor disk for the
duration of the session.
To switch to environment variable persistence, set the
``store_cred_in_env`` parameter to ``True`` in the configuration file:
.. code-block:: ini
[ipmi]
store_cred_in_env = True
Advanced configuration
======================
@ -231,10 +202,10 @@ a value that can be used from the list provided (from last to first):
.. code-block:: ini
[ipmi]
cipher_suite_versions = 1,2,3,6,7,8,11,12
cipher_suite_versions = ['1','2','3','6','7','8','11','12']
To find the suitable values for this configuration, you can check the field
``RMCP+ Cipher Suites`` after running an ``ipmitool`` command, e.g.:
`RMCP+ Cipher Suites` after running an ``ipmitool`` command, e.g.:
.. code-block:: console
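# Illustrative sketch only: the address and credentials below are
# placeholders; look for the "RMCP+ Cipher Suites" field in the output.
$ ipmitool -I lanplus -H <bmc_address> -U <username> -P <password> lan print
RMCP+ Cipher Suites     : 1,2,3,6,7,8,11,12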

View file

@ -4,18 +4,6 @@
iRMC driver
===========
.. warning::
**The iRMC driver is deprecated and will be removed in a future release.**
The Third Party CI for the iRMC driver stopped responding in 2019, and
attempts to contact the vendor have been unsuccessful. As a result, the
driver cannot be maintained and is being deprecated for removal.
Users of the ``irmc`` hardware type should begin planning migration to
alternative hardware types, ideally redfish. The use of ``ipmi`` as a
replacement to ``irmc`` is discouraged due to it being an ageing
management protocol which should be used with caution.
Overview
========
@ -58,16 +46,22 @@ hardware interfaces:
.. warning::
We deprecated the ``pxe`` boot interface when used with ``irmc``
hardware type. Support for this interface will be removed in the
future. Instead, use ``irmc-pxe``.
future. Instead, use ``irmc-pxe``. ``irmc-pxe`` boot interface
was introduced in Pike.
* console
Supports ``ipmitool-socat``, ``ipmitool-shellinabox``, and ``no-console``.
The default is ``ipmitool-socat``.
* inspect
Supports ``irmc``, ``agent``, and ``no-inspect``.
Supports ``irmc``, ``inspector``, and ``no-inspect``.
The default is ``irmc``.
.. note::
:ironic-inspector-doc:`Ironic Inspector <>`
needs to be present and configured to use ``inspector`` as the
inspect interface.
* management
Supports only ``irmc``.
@ -95,7 +89,7 @@ interfaces enabled for ``irmc`` hardware type.
enabled_boot_interfaces = irmc-virtual-media,irmc-pxe
enabled_console_interfaces = ipmitool-socat,ipmitool-shellinabox,no-console
enabled_deploy_interfaces = direct
enabled_inspect_interfaces = irmc,agent,no-inspect
enabled_inspect_interfaces = irmc,inspector,no-inspect
enabled_management_interfaces = irmc
enabled_network_interfaces = flat,neutron
enabled_power_interfaces = irmc
@ -183,7 +177,7 @@ Configuration via ``driver_info``
- string representing filesystem path to directory which contains
certification file: In this case, iRMC driver uses certification file
stored at specified directory. Ironic conductor must be able to access
that directory. For iRMC to recognize certification file, Ironic user
that directory. For iRMC to recognize certification file, Ironic user
must run ``openssl rehash <path_to_dir>``.
- string representing filesystem path to certification file: In this case,

View file

@ -1,10 +1,3 @@
.. meta::
:description: Configure Ironic Redfish driver for modern server management. Support for Dell, HPE, Supermicro, and standards-based BMC control via Redfish API.
:keywords: redfish driver, BMC management, server automation, dell idrac, hpe ilo, supermicro, standards-based management, REST API
:author: OpenStack Ironic Team
:robots: index, follow
:audience: system administrators, hardware engineers
==============
Redfish driver
==============
@ -13,33 +6,7 @@ Overview
========
The ``redfish`` driver enables managing servers compliant with the
Redfish_ standard. While Redfish strives to provide a standard model of
interaction for baseboard management controllers, vendors often do things
slightly different or interpret the standard differently. Ironic attempts
to support a number of features while also navigating these issues, however
if you do encounter any issues, please do not hesitate to open a bug for the
Ironic maintainers.
Supported features include:
* Network, :ref:`virtual media <redfish-virtual-media>` and :ref:`HTTP(s)
<redfish-https-boot>` boot.
* Additional virtual media features:
* :ref:`Ramdisk deploy interface <redfish-virtual-media-ramdisk>`.
* :doc:`/admin/dhcp-less`.
* `Virtual media API
<https://docs.openstack.org/api-ref/baremetal/#attach-detach-virtual-media-nodes>`_.
* :ref:`Changing boot mode and secure boot status <redfish-boot-mode>`.
* :doc:`In-band </admin/inspection/index>` and `out-of-band inspection`_.
* Retrieving and changing :ref:`BIOS settings <redfish-bios-settings>`.
* Applying :doc:`firmware updates </admin/firmware-updates>`.
* Configuring :doc:`hardware RAID </admin/raid>`.
* :doc:`Hardware metrics <redfish/metrics>` and integration with
`ironic-prometheus-exporter
<https://docs.openstack.org/ironic-prometheus-exporter/latest/>`_.
* Event notifications configured via :doc:`redfish/passthru`.
Redfish_ protocol.
Prerequisites
=============
@ -66,7 +33,7 @@ Enabling the Redfish driver
enabled_boot_interfaces = ipxe,redfish-virtual-media,redfish-https
enabled_power_interfaces = ipmitool,redfish
enabled_management_interfaces = ipmitool,redfish
enabled_inspect_interfaces = agent,redfish
enabled_inspect_interfaces = inspector,redfish
#. Restart the ironic conductor service::
@ -84,49 +51,53 @@ set to ``redfish``.
The following properties are specified in the node's ``driver_info``
field:
``redfish_address``
The URL address to the Redfish controller. It must include the authority
portion of the URL, and can optionally include the scheme. If the scheme is
missing, https is assumed. For example: ``https://mgmt.vendor.com``. This
is required.
- ``redfish_address``: The URL address to the Redfish controller. It must
include the authority portion of the URL, and can
optionally include the scheme. If the scheme is
missing, https is assumed.
For example: https://mgmt.vendor.com. This is required.
``redfish_system_id``
The canonical path to the ComputerSystem resource that the driver will
interact with. It should include the root service, version and the unique
resource path to the ComputerSystem. This property is only required if
target BMC manages more than one ComputerSystem. Otherwise ironic will pick
the only available ComputerSystem automatically. For example:
``/redfish/v1/Systems/1``.
- ``redfish_system_id``: The canonical path to the ComputerSystem resource
that the driver will interact with. It should include
the root service, version and the unique resource
path to the ComputerSystem. This property is only
required if target BMC manages more than one
ComputerSystem. Otherwise ironic will pick the only
available ComputerSystem automatically. For
example: /redfish/v1/Systems/1.
``redfish_username``
User account with admin/server-profile access privilege. Although not
required, it is highly recommended.
- ``redfish_username``: User account with admin/server-profile access
privilege. Although not required, it is highly
recommended.
``redfish_password``
User account password. Although not required, it is highly recommended.
- ``redfish_password``: User account password. Although not required, it is
highly recommended.
``redfish_verify_ca``
If ``redfish_address`` has the ``https://`` scheme, the driver will use a
secure (TLS_) connection when talking to the Redfish controller. By default
(if this is not set or set to ``True``), the driver will try to verify the
host certificates. This can be set to the path of a certificate file or
directory with trusted certificates that the driver will use for
verification. To disable verifying TLS_, set this to ``False``. This is
optional.
- ``redfish_verify_ca``: If redfish_address has the **https** scheme, the
driver will use a secure (TLS_) connection when
talking to the Redfish controller. By default
(if this is not set or set to True), the driver
will try to verify the host certificates. This
can be set to the path of a certificate file or
directory with trusted certificates that the
driver will use for verification. To disable
verifying TLS_, set this to False. This is optional.
``redfish_auth_type``
Redfish HTTP client authentication method. Can be ``basic``, ``session`` or
``auto``. The ``auto`` mode first tries ``session`` and falls back to
``basic`` if session authentication is not supported by the Redfish BMC.
Default is set in ironic config as :oslo.config:option:`redfish.auth_type`.
Most operators should not need to leverage this setting. Session based
authentication should generally be used in most cases as it prevents
re-authentication every time a background task checks in with the BMC.
- ``redfish_auth_type``: Redfish HTTP client authentication method. Can be
"basic", "session" or "auto".
The "auto" mode first tries "session" and falls back
to "basic" if session authentication is not supported
by the Redfish BMC. Default is set in ironic config
as ``[redfish]auth_type``. Most operators should not
need to leverage this setting. Session based
authentication should generally be used in most
cases as it prevents re-authentication every time
a background task checks in with the BMC.
.. note::
The ``redfish_address``, ``redfish_username``, ``redfish_password``,
and ``redfish_verify_ca`` fields, if changed, will trigger a new session
to be established and cached with the BMC. The ``redfish_auth_type`` field
to be established and cached with the BMC. The ``redfish_auth_type`` field
will only be used for the creation of a new cached session, or should
one be rejected by the BMC.
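Putting these fields together, enrolling a node might look like the following
sketch (all values are placeholders):
.. code-block:: bash
baremetal node create --driver redfish \
  --driver-info redfish_address=https://example.com \
  --driver-info redfish_system_id=/redfish/v1/Systems/<system_id> \
  --driver-info redfish_username=admin \
  --driver-info redfish_password=password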
@ -144,8 +115,6 @@ a node with the ``redfish`` driver. For example:
For more information about enrolling nodes see :ref:`enrollment`
in the install guide.
.. _redfish-boot-mode:
Boot mode support
=================
@ -188,6 +157,26 @@ end, the previous power state is restored.
This logic makes changing boot configuration more robust at the expense of
several reboots in the worst case.
Out-Of-Band inspection
======================
The ``redfish`` hardware type can inspect the bare metal node by querying the
Redfish-compatible BMC. This process is quick and reliable compared to the
way the ``inspector`` inspect interface works, i.e. booting the bare metal
node into the introspection ramdisk.
.. note::
The ``redfish`` inspect interface relies on the optional parts of the
Redfish specification. Not all Redfish-compliant BMCs might serve the
required information, in which case bare metal node inspection will fail.
.. note::
The ``local_gb`` property cannot always be discovered, for example, when a
node does not have local storage or the Redfish implementation does not
support the required schema. In this case the property will be set to 0.
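With the ``redfish`` inspect interface enabled, out-of-band inspection is
triggered like any other inspection, for example:
.. code-block:: bash
baremetal node inspect <node name or UUID>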
.. _redfish-virtual-media:
Virtual media boot
BIOS boot mode, it suffices to set ironic boot interface to
:doc:`/admin/drivers/idrac` for more details on this hardware type.
If UEFI boot mode is desired, the user should additionally supply EFI
System Partition image (ESP_), see :doc:`/install/configure-esp` for details.
System Partition image (ESP_), see `Configuring an ESP image`_ for details.
If ``[driver_info]/config_via_floppy`` boolean property of the node is set to
``true``, ironic will create a file with runtime configuration parameters,
@ -282,9 +271,81 @@ with the Wallaby release it's possible to provide a pre-built ISO image:
for backward compatibility.
No customization is currently done to the image, so e.g.
:doc:`/admin/dhcp-less` won't work. :doc:`/install/configure-esp` is also
:doc:`/admin/dhcp-less` won't work. `Configuring an ESP image`_ is also
unnecessary.
Configuring an ESP image
~~~~~~~~~~~~~~~~~~~~~~~~~
An ESP image is an image that contains the necessary bootloader to boot the ISO
in UEFI mode. You will need a GRUB2 image file, as well as Shim for secure
boot. See :ref:`uefi-pxe-grub` for an explanation of how to get them.
Then the following script can be used to build an ESP image:
.. code-block:: bash
DEST=/path/to/esp.img
GRUB2=/path/to/grub.efi
SHIM=/path/to/shim.efi
TEMP_MOUNT=$(mktemp -d)
dd if=/dev/zero of=$DEST bs=4096 count=1024
mkfs.fat -s 4 -r 512 -S 4096 $DEST
sudo mount $DEST $TEMP_MOUNT
# copy the bootloaders into the mounted filesystem, not the image file itself
sudo mkdir -p $TEMP_MOUNT/EFI/BOOT
sudo cp "$SHIM" $TEMP_MOUNT/EFI/BOOT/BOOTX64.efi
sudo cp "$GRUB2" $TEMP_MOUNT/EFI/BOOT/GRUBX64.efi
sudo umount $TEMP_MOUNT
.. note::
If you use an architecture other than x86-64, you'll need to adjust the
destination paths.
.. warning::
If you are using secure boot, you *must* utilize the same SHIM and GRUB
binaries matching your distribution's kernel and ramdisk, otherwise the
Secure Boot "chain of trust" will be broken.
Additionally, if you encounter odd issues when UEFI booting with virtual
media that point to the bootloader, verify that the appropriate
distribution-matching binaries are in use.
The resulting image should be provided via the ``driver_info/bootloader``
ironic node property in form of an image UUID or a URL:
.. code-block:: bash
baremetal node set --driver-info bootloader=<glance-uuid-or-url> node-0
Alternatively, set the bootloader UUID or URL in the configuration file:
.. code-block:: ini
[conductor]
bootloader = <glance-uuid-or-url>
Finally, you need to provide the correct GRUB2 configuration path for your
image. In most cases this path will depend on your distribution, more
precisely, the distribution you took the GRUB2 image from. For example:
CentOS:
.. code-block:: ini
[DEFAULT]
grub_config_path = EFI/centos/grub.cfg
Ubuntu:
.. code-block:: ini
[DEFAULT]
grub_config_path = EFI/ubuntu/grub.cfg
.. note::
Unlike in the script above, these paths are case-sensitive!
.. _redfish-virtual-media-ramdisk:
Virtual Media Ramdisk
@ -328,7 +389,11 @@ or via a link to a raw image:
baremetal node deploy <node name or UUID> \
--config-drive http://example.com/config.img
.. _redfish-https-boot:
Layer 3 or DHCP-less ramdisk booting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DHCP-less deploy is supported by the Redfish virtual media boot. See
:doc:`/admin/dhcp-less` for more information.
Redfish HTTP(s) Boot
====================
@ -358,52 +423,161 @@ where the node should boot from.
Like the ``redfish-virtual-media`` boot interface, you will need
to create an EFI System Partition image (ESP_), see
:doc:`/install/configure-esp` for details on how to do this.
`Configuring an ESP image`_ for details on how to do this.
Additionally, if you would like to use the ``ramdisk`` deployment
interface, the same basic instructions covered in `Virtual Media Ramdisk`_
apply; just use ``redfish-https`` as the boot_interface, and keep in mind that
no configuration drives exist with the ``redfish-https`` boot interface.
Limitations & Issues
~~~~~~~~~~~~~~~~~~~~
Firmware update using manual cleaning
=====================================
Ironic contains two different ways of providing an HTTP(S) URL
to a remote BMC. The first is Swift, enabled when :oslo.config:option:`redfish.use_swift`
is set to ``true``. Ironic uploads files to Swift, which are then shared as
Temporary Swift URLs. While highly scalable, this method does suffer from
issues where some vendors' BMCs reject URLs with **&** or **?** characters.
There is no available workaround to leverage Swift in this state.
The ``redfish`` hardware type supports updating the firmware on nodes using a
manual cleaning step.
When the :oslo.config:option:`redfish.use_swift` setting is set to ``false``, Ironic will house
the files locally in the :oslo.config:option:`deploy.http_root` folder structure, and then
generate a URL pointing the BMC to connect to the HTTP service configured
via :oslo.config:option:`deploy.http_url`.
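A conductor configured for the local HTTP approach might look roughly like
the following (a sketch; the URL and root path are illustrative assumptions):
.. code-block:: ini
[redfish]
use_swift = False
[deploy]
http_url = http://conductor.example.com:8080
http_root = /httpboot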
The firmware update cleaning step allows one or more firmware updates to be
applied to a node. If multiple updates are specified, then they are applied
sequentially in the order given. The server is rebooted once per update.
If a failure occurs, the cleaning step immediately fails which may result
in some updates not being applied. If the node is placed into maintenance
mode while a firmware update cleaning step is running that is performing
multiple firmware updates, the update in progress will complete, and processing
of the remaining updates will pause. When the node is taken out of maintenance
mode, processing of the remaining updates will continue.
Out-Of-Band inspection
======================
When updating the BMC firmware, the BMC may become unavailable for a period of
time as it resets. In this case, it may be desirable to have the cleaning step
wait after the update has been applied before indicating that the
update was successful. This allows the BMC time to fully reset before further
operations are carried out against it. To cause the cleaning step to wait after
applying an update, an optional ``wait`` argument may be specified in the
firmware image dictionary. The value of this argument indicates the number of
seconds to wait following the update. If the ``wait`` argument is not
specified, then this is equivalent to ``wait 0``, meaning that it will not
wait and immediately proceed with the next firmware update if there is one,
or complete the cleaning step if not.
The ``update_firmware`` cleaning step accepts JSON in the following format::
[{
"interface": "management",
"step": "update_firmware",
"args": {
"firmware_images":[
{
"url": "<url_to_firmware_image1>",
"checksum": "<checksum for image, uses SHA1, SHA256, or SHA512>",
"source": "<optional override source setting for image>",
"wait": <number_of_seconds_to_wait>
},
{
"url": "<url_to_firmware_image2>"
},
...
]
}
}]
The different attributes of the ``update_firmware`` cleaning step are as follows:
.. csv-table::
:header: "Attribute", "Description"
:widths: 30, 120
"``interface``", "Interface of the cleaning step. Must be ``management`` for firmware update"
"``step``", "Name of cleaning step. Must be ``update_firmware`` for firmware update"
"``args``", "Keyword-argument entry (<name>: <value>) being passed to cleaning step"
"``args.firmware_images``", "Ordered list of dictionaries of firmware images to be applied"
Each firmware image dictionary is of the form::
{
"url": "<URL of firmware image file>",
"checksum": "<checksum for image, uses SHA1>",
"source": "<Optional override source setting for image>",
"wait": <Optional time in seconds to wait after applying update>
}
The ``url`` and ``checksum`` arguments in the firmware image dictionary are
mandatory, while the ``source`` and ``wait`` arguments are optional.
For ``url`` currently ``http``, ``https``, ``swift`` and ``file`` schemes are
supported.
``source`` corresponds to ``[redfish]firmware_source``; by setting it here,
it is possible to override the global setting per firmware image in the
clean step arguments.
The ``redfish`` hardware type can inspect the bare metal node by querying the
Redfish-compatible BMC. This process is quick and reliable compared to the
way the ``agent`` inspect interface works, i.e. booting the bare metal node
into the introspection ramdisk. The inspection collects various hardware
information including LLDP (Link Layer Discovery Protocol) data when
available from the BMC, such as chassis ID, port ID, system name, system
description, system capabilities, and management addresses.
.. note::
At the present time, targets for the firmware update cannot be specified.
In testing, the BMC applied the update to all applicable targets on the
node. It is assumed that the BMC knows what components a given firmware
image is applicable to.
The ``redfish`` inspect interface relies on the optional parts of the
Redfish specification. Not all Redfish-compliant BMCs might serve the
required information, in which case bare metal node inspection will fail.
To perform a firmware update, first download the firmware to a web server,
Swift or filesystem that the Ironic conductor or BMC has network access to.
This could be the ironic conductor web server or another web server on the BMC
network. Using a web browser, curl, or similar tool on a server that has
network access to the BMC or Ironic conductor, try downloading the firmware to
verify that the URLs are correct and that the web server is configured
properly.
Next, construct the JSON for the firmware update cleaning step to be executed.
When launching the firmware update, the JSON may be specified on the command
line directly or in a file. The following example shows one cleaning step that
installs four firmware updates. All except the 3rd entry, which has an explicit
``source`` set, use the setting from ``[redfish]firmware_source`` to determine
if and where to stage the files::
[{
"interface": "management",
"step": "update_firmware",
"args": {
"firmware_images":[
{
"url": "http://192.0.2.10/BMC_4_22_00_00.EXE",
"checksum": "<sha1-checksum-of-the-file>",
"wait": 300
},
{
"url": "https://192.0.2.10/NIC_19.0.12_A00.EXE",
"checksum": "<sha1-checksum-of-the-file>"
},
{
"url": "file:///firmware_images/idrac/9/PERC_WN64_6.65.65.65_A00.EXE",
"checksum": "<sha1-checksum-of-the-file>",
"source": "http"
},
{
"url": "swift://firmware_container/BIOS_W8Y0W_WN64_2.1.7.EXE",
"checksum": "<sha1-checksum-of-the-file>"
}
]
}
}]
Finally, launch the firmware update cleaning step against the node. The
following example assumes the above JSON is in a file named
``firmware_update.json``::
baremetal node clean <ironic_node_uuid> --clean-steps firmware_update.json
In the following example, the JSON is specified directly on the command line::
baremetal node clean <ironic_node_uuid> --clean-steps '[{"interface": "management", "step": "update_firmware", "args": {"firmware_images":[{"url": "http://192.0.2.10/BMC_4_22_00_00.EXE", "wait": 300}, {"url": "https://192.0.2.10/NIC_19.0.12_A00.EXE"}]}}]'
.. note::
Firmware updates may take some time to complete. If a firmware update
cleaning step consistently times out, then consider performing fewer
firmware updates in the cleaning step or increasing
``clean_callback_timeout`` in ironic.conf to increase the timeout value.
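For example, the timeout could be raised in ``ironic.conf`` (a sketch assuming
the option lives in the ``[conductor]`` section; the value is illustrative):
.. code-block:: ini
[conductor]
clean_callback_timeout = 3600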
The ``local_gb`` property cannot always be discovered, for example, when a
node does not have local storage or the Redfish implementation does not
support the required schema. In this case the property will be set to 0.
.. _redfish-bios-settings:
.. warning::
Removing power from a server while it is in the process of updating
firmware may result in devices in the server, or the server itself becoming
inoperable.
Retrieving BIOS Settings
========================
@ -429,15 +603,163 @@ settings. The following fields will be returned in the BIOS API
"``unique``", "The setting is specific to this node"
"``reset_required``", "After changing this setting a node reboot is required"
Further topics
==============
.. _node-vendor-passthru-methods:
.. toctree::
Node Vendor Passthru Methods
============================
.. csv-table::
:header: "Method", "Description"
:widths: 25, 120
"``create_subscription``", "Create a new subscription on the Node"
"``delete_subscription``", "Delete a subscription of a Node"
"``get_all_subscriptions``", "List all subscriptions of a Node"
"``get_subscription``", "Show a single subscription of a Node"
"``eject_vmedia``", "Eject attached virtual media from a Node"
Create Subscription
~~~~~~~~~~~~~~~~~~~
.. csv-table:: Request
:header: "Name", "In", "Type", "Description"
:widths: 25, 15, 15, 90
"Destination", "body", "string", "The URI of the destination Event Service"
"EventTypes (optional)", "body", "array", "List of ypes of events that shall be sent to the destination"
"Context (optional)", "body", "string", "A client-supplied string that is stored with the event destination
subscription"
"Protocol (optional)", "body", "string", "The protocol type that the event will use for sending
the event to the destination"
Example JSON to use in ``create_subscription``::
{
"Destination": "https://someurl",
"EventTypes": ["Alert"],
"Context": "MyProtocol",
"args": "Redfish"
}
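Such a subscription could then be created via the vendor passthru CLI, for
example (a sketch; the destination URL is a placeholder and POST is assumed
to be the appropriate HTTP method):
.. code-block:: bash
baremetal node passthru call --http-method POST <node> create_subscription \
  --arg "Destination=https://someurl"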
Delete Subscription
~~~~~~~~~~~~~~~~~~~
.. csv-table:: Request
:header: "Name", "In", "Type", "Description"
:widths: 21, 21, 21, 37
"id", "body", "string", "The Id of the subscription generated by the BMC "
Example JSON to use in ``delete_subscription``::
{
"id": "<id of the subscription generated by the BMC>"
}
Get Subscription
~~~~~~~~~~~~~~~~
.. csv-table:: Request
:header: "Name", "In", "Type", "Description"
:widths: 21, 21, 21, 37
"id", "body", "string", "The Id of the subscription generated by the BMC "
Example JSON to use in ``get_subscription``::
{
"id": "<id of the subscription generated by the BMC>"
}
Get All Subscriptions
~~~~~~~~~~~~~~~~~~~~~
The ``get_all_subscriptions`` method doesn't require any parameters.
Eject Virtual Media
~~~~~~~~~~~~~~~~~~~
.. csv-table:: Request
:header: "Name", "In", "Type", "Description"
:widths: 25, 15, 15, 90
"boot_device (optional)", "body", "string", "Type of the device to eject (all devices by default)"
Internal Session Cache
======================
The ``redfish`` hardware type, and derived interfaces, utilizes a built-in
session cache which prevents Ironic from re-authenticating every time
Ironic attempts to connect to the BMC for any reason.
This consists of cached connector objects which are used and tracked by
a unique consideration of ``redfish_username``, ``redfish_password``,
``redfish_verify_ca``, and finally ``redfish_address``. Changing any one
of those values will trigger a new session to be created.
The ``redfish_system_id`` value is explicitly not considered, as Redfish
has a model of one BMC managing many systems, which is also a model
Ironic supports.
The session cache default size is ``1000`` sessions per conductor.
If you are operating a deployment with a large number of Redfish
BMCs, it is advised that you tune that number appropriately.
This can be tuned via the API service configuration file,
``[redfish]connection_cache_size``.
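For example (the value shown is only an illustrative increase over the
default):
.. code-block:: ini
[redfish]
connection_cache_size = 5000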
Session Cache Expiration
~~~~~~~~~~~~~~~~~~~~~~~~
By default, sessions remain cached for as long as possible in
memory, as long as they have not experienced an authentication,
connection, or other unexplained error.
Under normal circumstances, the sessions will only be rolled out
of the cache in order of oldest first when the cache becomes full.
There is no time-based expiration of entries in the session cache.
Of course, the cache is only in memory, and restarting the
``ironic-conductor`` will also cause the cache to be rebuilt
from scratch. If this happens due to a persistent connectivity issue,
it may be a sign of an unexpected condition; please consider
contacting the Ironic developer community for assistance.
Redfish Interoperability Profile
================================
The Ironic project provides a Redfish Interoperability Profile located in
the ``redfish-interop-profiles`` folder at the source code root. The Redfish
Interoperability Profile is a JSON document written in a particular format
that serves two purposes:
* It enables the creation of a human-readable document that merges the
profile requirements with the Redfish schema into a single document
for developers or users.
* It allows a conformance test utility to test a Redfish Service
implementation for conformance with the profile.
The JSON document structure is intended to align easily with JSON payloads
retrieved from Redfish Service implementations, to allow for easy comparisons
and conformance testing. Many of the properties defined within this structure
have assumed default values that correspond with the most common use case, so
that those properties can be omitted from the document for brevity.
Validation of Profiles using DMTF tool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An open source utility has been created by the Redfish Forum to verify that
a Redfish Service implementation conforms to the requirements included in a
Redfish Interoperability Profile. The Redfish Interop Validator is available
for download from the DMTF's organization on GitHub at
https://github.com/DMTF/Redfish-Interop-Validator. Refer to the instructions
in the README on how to configure and run validation.
redfish/metrics
redfish/passthru
redfish/session-cache
redfish/interop
.. _Redfish: http://redfish.dmtf.org/
.. _Sushy: https://opendev.org/openstack/sushy

View file

@ -1,33 +0,0 @@
Redfish Interoperability Profile
================================
The Ironic project provides a Redfish Interoperability Profile located in
the ``redfish-interop-profiles`` folder at the source code root. The Redfish
Interoperability Profile is a JSON document written in a particular format
that serves two purposes:
* It enables the creation of a human-readable document that merges the
profile requirements with the Redfish schema into a single document
for developers or users.
* It allows a conformance test utility to test a Redfish Service
implementation for conformance with the profile.
The JSON document structure is intended to align easily with JSON payloads
retrieved from Redfish Service implementations, to allow for easy comparisons
and conformance testing. Many of the properties defined within this structure
have assumed default values that correspond with the most common use case, so
that those properties can be omitted from the document for brevity.
.. toctree::
OpenStackIronicProfile.v1_1_0
Validation of Profiles using DMTF tool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An open source utility has been created by the Redfish Forum to verify that
a Redfish Service implementation conforms to the requirements included in a
Redfish Interoperability Profile. The Redfish Interop Validator is available
for download from the DMTF's organization on GitHub at
https://github.com/DMTF/Redfish-Interop-Validator. Refer to the instructions
in the README on how to configure and run validation.

View file

@ -1,203 +0,0 @@
Redfish hardware metrics
========================
The ``redfish`` hardware type supports sending hardware metrics via the
:doc:`notification system </admin/notifications>`. The ``event_type`` field of
a notification will be set to ``hardware.redfish.metrics`` (where ``redfish``
may be replaced by a different driver name for hardware types derived from it).
Enabling redfish hardware metrics requires some ironic.conf configuration file
updates:
.. code-block:: ini
[oslo_messaging_notifications]
# The Drivers(s) to handle sending notifications. Possible
# values are messaging, messagingv2, routing, log, test, noop,
# prometheus_exporter (multi valued)
# Example using the messagingv2 driver:
driver = messagingv2
[sensor_data]
send_sensor_data = true
[metrics]
backend = collector
A full list of ``[oslo_messaging_notifications]`` configuration options can be found in the
`oslo.messaging documentation <https://docs.openstack.org/oslo.messaging/latest/configuration/opts.html#oslo-messaging-notifications>`_
The payload of each notification is a mapping where keys are sensor types
(``Fan``, ``Temperature``, ``Power`` or ``Drive``) and values are also mappings
from sensor identifiers to the sensor data.
Each ``Fan`` payload contains the following fields:
* ``max_reading_range``, ``min_reading_range`` - the range of reading values.
* ``reading``, ``reading_units`` - the current reading and its units.
* ``serial_number`` - the serial number of the fan sensor.
* ``physical_context`` - the context of the sensor, such as ``SystemBoard``.
Can also be ``null`` or just ``Fan``.
Each ``Temperature`` payload contains the following fields:
* ``max_reading_range_temp``, ``min_reading_range_temp`` - the range of reading
values.
* ``reading_celsius`` - the current reading in degrees Celsius.
* ``sensor_number`` - the number of the temperature sensor.
* ``physical_context`` - the context of the sensor, usually reflecting its
location, such as ``CPU``, ``Memory``, ``Intake``, ``PowerSupply`` or
``SystemBoard``. Can also be ``null``.
Each ``Power`` payload contains the following fields:
* ``power_capacity_watts``, ``line_input_voltage``, ``last_power_output_watts``
* ``serial_number`` - the serial number of the power source.
* ``state`` - the power source state: ``enabled``, ``absent`` (``null`` if
unknown).
* ``health`` - the power source health status: ``ok``, ``warning``,
``critical`` (``null`` if unknown).
Each ``Drive`` payload contains the following fields:
* ``name`` - the drive name in the BMC (this is **not** a Linux device name
like ``/dev/sda``).
* ``model`` - the drive model (if known).
* ``capacity_bytes`` - the drive capacity in bytes.
* ``state`` - the drive state: ``enabled``, ``absent`` (``null`` if unknown).
* ``health`` - the drive health status: ``ok``, ``warning``, ``critical``
(``null`` if unknown).
.. note::
Drive payloads are often not available on real hardware.
.. warning::
Metrics collection works by polling several Redfish endpoints on the target
BMC. Some older BMC implementations may have hard rate limits or misbehave
under load. If this is the case for you, you need to reduce the metrics
collection frequency or completely disable it.
Example (Dell)
--------------
.. code-block:: json
{
"message_id": "578628d2-9967-4d33-97ca-7e7c27a76abc",
"publisher_id": "conductor-1.example.com",
"event_type": "hardware.redfish.metrics",
"priority": "INFO",
"payload": {
"message_id": "60653d54-87aa-43b8-a4ed-96d568dd4e96",
"instance_uuid": null,
"node_uuid": "aea161dc-2e96-4535-b003-ca70a4a7bb6d",
"timestamp": "2023-10-22T15:50:26.841964",
"node_name": "dell-430",
"event_type": "hardware.redfish.metrics.update",
"payload": {
"Fan": {
"0x17||Fan.Embedded.1A@System.Embedded.1": {
"identity": "0x17||Fan.Embedded.1A",
"max_reading_range": null,
"min_reading_range": 720,
"reading": 1680,
"reading_units": "RPM",
"serial_number": null,
"physical_context": "SystemBoard",
"state": "enabled",
"health": "ok"
},
"0x17||Fan.Embedded.2A@System.Embedded.1": {
"identity": "0x17||Fan.Embedded.2A",
"max_reading_range": null,
"min_reading_range": 720,
"reading": 3120,
"reading_units": "RPM",
"serial_number": null,
"physical_context": "SystemBoard",
"state": "enabled",
"health": "ok"
},
"0x17||Fan.Embedded.2B@System.Embedded.1": {
"identity": "0x17||Fan.Embedded.2B",
"max_reading_range": null,
"min_reading_range": 720,
"reading": 3000,
"reading_units": "RPM",
"serial_number": null,
"physical_context": "SystemBoard",
"state": "enabled",
"health": "ok"
}
},
"Temperature": {
"iDRAC.Embedded.1#SystemBoardInletTemp@System.Embedded.1": {
"identity": "iDRAC.Embedded.1#SystemBoardInletTemp",
"max_reading_range_temp": 47,
"min_reading_range_temp": -7,
"reading_celsius": 28,
"physical_context": "SystemBoard",
"sensor_number": 4,
"state": "enabled",
"health": "ok"
},
"iDRAC.Embedded.1#CPU1Temp@System.Embedded.1": {
"identity": "iDRAC.Embedded.1#CPU1Temp",
"max_reading_range_temp": 90,
"min_reading_range_temp": 3,
"reading_celsius": 63,
"physical_context": "CPU",
"sensor_number": 14,
"state": "enabled",
"health": "ok"
}
},
"Power": {
"PSU.Slot.1:Power@System.Embedded.1": {
"power_capacity_watts": null,
"line_input_voltage": 206,
"last_power_output_watts": null,
"serial_number": "CNLOD0075324D7",
"state": "enabled",
"health": "ok"
},
"PSU.Slot.2:Power@System.Embedded.1": {
"power_capacity_watts": null,
"line_input_voltage": null,
"last_power_output_watts": null,
"serial_number": "CNLOD0075324E5",
"state": null,
"health": "critical"
}
},
"Drive": {
"Solid State Disk 0:1:0:RAID.Integrated.1-1@System.Embedded.1": {
"name": "Solid State Disk 0:1:0",
"capacity_bytes": 479559942144,
"state": "enabled",
"health": "ok"
},
"Physical Disk 0:1:1:RAID.Integrated.1-1@System.Embedded.1": {
"name": "Physical Disk 0:1:1",
"capacity_bytes": 1799725514752,
"state": "enabled",
"health": "ok"
},
"Physical Disk 0:1:2:RAID.Integrated.1-1@System.Embedded.1": {
"name": "Physical Disk 0:1:2",
"capacity_bytes": 1799725514752,
"state": "enabled",
"health": "ok"
},
"Backplane 1 on Connector 0 of Integrated RAID Controller 1:RAID.Integrated.1-1@System.Embedded.1": {
"name": "Backplane 1 on Connector 0 of Integrated RAID Controller 1",
"capacity_bytes": null,
"state": "enabled",
"health": "ok"
}
}
}
},
"timestamp": "2023-10-22 15:50:36.700458"
}

View file

@ -1,87 +0,0 @@
Node Vendor Passthru Methods
============================
.. csv-table::
:header: "Method", "Description"
:widths: 25, 120
"``create_subscription``", "Create a new subscription on the Node"
"``delete_subscription``", "Delete a subscription of a Node"
"``get_all_subscriptions``", "List all subscriptions of a Node"
"``get_subscription``", "Show a single subscription of a Node"
"``eject_vmedia``", "Eject attached virtual media from a Node"
Create Subscription
~~~~~~~~~~~~~~~~~~~
.. csv-table:: Request
:header: "Name", "In", "Type", "Description"
:widths: 25, 15, 15, 90
"Destination", "body", "string", "The URI of the destination Event Service"
"EventTypes (optional)", "body", "array", "List of types of events that shall be sent to the destination"
"Context (optional)", "body", "string", "A client-supplied string that is stored with the event destination
subscription"
"Protocol (optional)", "body", "string", "The protocol type that the event will use for sending
the event to the destination"
Example JSON to use in ``create_subscription``::
{
"Destination": "https://someurl",
"EventTypes": ["Alert"],
"Context": "MyProtocol",
"args": "Redfish"
}
Delete Subscription
~~~~~~~~~~~~~~~~~~~
.. csv-table:: Request
:header: "Name", "In", "Type", "Description"
:widths: 21, 21, 21, 37
"id", "body", "string", "The Id of the subscription generated by the BMC "
Example JSON to use in ``delete_subscription``::
{
"id": "<id of the subscription generated by the BMC>"
}
Get Subscription
~~~~~~~~~~~~~~~~
.. csv-table:: Request
:header: "Name", "In", "Type", "Description"
:widths: 21, 21, 21, 37
"id", "body", "string", "The Id of the subscription generated by the BMC "
Example JSON to use in ``get_subscription``::
{
"id": "<id of the subscription generated by the BMC>"
}
Get All Subscriptions
~~~~~~~~~~~~~~~~~~~~~
The ``get_all_subscriptions`` method doesn't require any parameters.
Eject Virtual Media
~~~~~~~~~~~~~~~~~~~
.. csv-table:: Request
:header: "Name", "In", "Type", "Description"
:widths: 25, 15, 15, 90
"boot_device (optional)", "body", "string", "Type of the device to eject (all devices by default)"

View file

@ -1,38 +0,0 @@
Internal Session Cache
======================
The ``redfish`` hardware type, and derived interfaces, utilizes a built-in
session cache which prevents Ironic from re-authenticating every time
Ironic attempts to connect to the BMC for any reason.
This consists of cached connector objects which are used and tracked by
a unique consideration of ``redfish_username``, ``redfish_password``,
``redfish_verify_ca``, and finally ``redfish_address``. Changing any one
of those values will trigger a new session to be created.
The ``redfish_system_id`` value is explicitly not considered, as Redfish
has a model of one BMC managing many systems, which is also a model
Ironic supports.
The session cache default size is ``1000`` sessions per conductor.
If you are operating a deployment with a large number of Redfish
BMCs, it is advised that you tune that number appropriately.
This can be tuned via the API service configuration file,
:oslo.config:option:`redfish.connection_cache_size`.
Session Cache Expiration
~~~~~~~~~~~~~~~~~~~~~~~~
By default, sessions remain cached for as long as possible in
memory, as long as they have not experienced an authentication,
connection, or other unexplained error.
Under normal circumstances, the sessions will only be rolled out
of the cache in order of oldest first when the cache becomes full.
There is no time-based expiration of entries in the session cache.
Of course, the cache is only in memory, and restarting the
``ironic-conductor`` will also cause the cache to be rebuilt
from scratch. If this happens due to a persistent connectivity issue,
it may be a sign of an unexpected condition; please consider
contacting the Ironic developer community for assistance.

Some files were not shown because too many files have changed in this diff.