%global _empty_manifest_terminate_build 0
Name: python-fabric-am-handlers
Version: 1.4.3
Release: 1
Summary: Fabric Aggregate Manager Handlers
License: MIT License
URL: https://github.com/fabric-testbed/AMHandlers
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/f1/4e/89934d0210a467c1917abddd65689e8082889efc198145a0ffe2eaa121da/fabric-am-handlers-1.4.3.tar.gz
BuildArch: noarch
Requires: python3-ansible
Requires: python3-paramiko
Requires: python3-fabric-cf
%description
# AMHandlers
## Aggregate Manager
An aggregate manager (AM) controls access to the substrate components. It controls some set of infrastructure resources in a particular site consisting of a set of servers, storage units, network elements, or other components under common ownership and control. AMs inform brokers about available resources by passing resource advertisement information models to them. AMs may be associated with more than one broker, and the partitioning of resources between brokers is a decision left to the AM. Oversubscription is possible, depending on the deployment needs.
FABRIC enables a substrate provider to outsource resource arbitration and calendar scheduling to a broker. By delegating resources to the broker, the AM consents to the broker's policies and agrees to try to honor reservations issued by the broker if the user has authorization on the AM.
Besides common code, each AM type has resource-specific modules that determine its resource allocation behavior (Resource Management Policy) and the specific actions it takes to provision a sliver (Resource Handler). Both plugins are invoked by the AM common core code based on the resource type or the type of request being considered.
## Handlers
The AM upcalls a handler interface to set up and tear down each sliver. Resource handlers perform any substrate-specific configuration actions needed to implement slivers. The handler interface includes a probe method to poll the current status of a sliver and a modify method to adjust attributes of a sliver.
Handlers are registered and selected by resource type. Each handler invocation executes in an independent thread, so handlers may block for slow configuration actions. Handlers are invoked through a class called HandlerProcessor, which can invoke an interpreter for a handler scripting language. The handlers are written in Python.
Each handler implements three basic operation types for a resource (a minimal sketch follows the list):
- Create - provision a resource
  - Example: create a VM, a bare-metal node, or a network connection
- Delete - un-provision a resource
  - Undo the Create above
- Modify - modify the state of a resource
  - Example: modify a property of the VM or a network connection (e.g. change bandwidth)
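As a rough illustration of this operation set, a minimal handler skeleton might look like the sketch below. The class and method names are illustrative only and do not reflect the actual fabric_am handler API.
```
# Illustrative sketch only: the real fabric_am handlers define their own base
# classes, result dictionaries, and error-handling conventions.
class ExampleVMHandler:
    def create(self, sliver: dict) -> dict:
        """Provision the resource described by the sliver, e.g. boot a VM."""
        # run the appropriate provisioning playbook here
        return {"status": "ok", "operation": "create"}

    def delete(self, sliver: dict) -> dict:
        """Un-provision the resource, undoing a previous create."""
        return {"status": "ok", "operation": "delete"}

    def modify(self, sliver: dict, properties: dict) -> dict:
        """Adjust attributes of an existing sliver, e.g. change bandwidth."""
        return {"status": "ok", "operation": "modify", "modified": list(properties)}
```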
Each operation can have subcommands and parameters that determine the details of the actions taken, some of which are discussed below. These parameters help ‘stitch’ multiple slivers together. A canonical example is passing network information from the handler provisioning the network to the handler provisioning a compute node, so that the compute node ends up with the correct network configuration (e.g. attached to the correct VLAN). Specific parameters for operations are a matter of convention between the resource management policy and the plugin.
Handlers receive the parameters as part of the provisioning workflow (a sequence of redeem operations) executed by the Orchestrator on the AMs. They can also pass information about reserved resources back to the Orchestrator as part of the standard exchange of messages between the AM and the Orchestrator during provisioning.
## Playbooks
Handlers use Ansible Playbooks for provisioning.
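As a hedged sketch (not necessarily how this package drives Ansible), a handler could invoke such a playbook from Python with the `ansible-runner` library; the function name and scratch directory below are hypothetical:
```
# Sketch only: fabric-am-handlers may invoke Ansible differently.
import ansible_runner

def run_provisioning_playbook(playbook_path: str, inventory_path: str, extra_vars: dict) -> bool:
    # ansible-runner executes the playbook and collects artifacts under
    # private_data_dir; extravars are passed to the playbook as variables.
    result = ansible_runner.run(
        private_data_dir="/tmp/am-handler-run",  # hypothetical scratch directory
        playbook=playbook_path,                   # e.g. head_vm_provisioning.yml
        inventory=inventory_path,                 # e.g. the headnode inventory
        extravars=extra_vars,
    )
    return result.rc == 0
```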
## Interface and design

## Configuration
AM Handlers require the following configuration to be set up:
- [Inventory](./fabric_am/playbooks/inventory) information for the headnode
- Handler config file (VM Handler example depicted below)
### VM Handler Config File
The VM Handler config file can be found at `fabric_am/config/vm_handler_config.yml`.
It describes the playbook location and the playbook names used for specific operations.
```
playbooks:
  location: /etc/fabric/actor/playbooks
  inventory_location: /etc/fabric/actor/playbooks/inventory
  VM: head_vm_provisioning.yml
  GPU: worker_pci_provisioning.yml
  SmartNIC: worker_pci_provisioning.yml
  SharedNIC: worker_pci_provisioning.yml
  FPGA: worker_pci_provisioning.yml
  NVME: worker_pci_provisioning.yml
```
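For illustration, a handler could resolve the playbook for a given resource type from this config roughly as follows; the helper name is hypothetical and not part of the package API:
```
# Sketch under the assumption that the config has the structure shown above.
import os
import yaml

def resolve_playbook(config_path: str, resource_type: str) -> str:
    with open(config_path) as f:
        config = yaml.safe_load(f)
    playbooks = config["playbooks"]
    # 'location' and 'inventory_location' are paths; the remaining keys map
    # resource types (VM, GPU, SmartNIC, ...) to playbook file names.
    return os.path.join(playbooks["location"], playbooks[resource_type])

# resolve_playbook("fabric_am/config/vm_handler_config.yml", "VM")
# -> "/etc/fabric/actor/playbooks/head_vm_provisioning.yml"
```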
### PCI Device Support
PCI devices are passed to the VMs using PCI passthrough. FABRIC also supports `SR-IOV` virtual functions, which are likewise passed to the VMs using PCI passthrough. In order to handle the hairpin connections on `SR-IOV`, `VEPA` mode is enabled on the underlying bridge for the dedicated card used to host the virtual functions.
```
[kissel@uky-w1 ~]$ sudo bridge link set dev ens6f1 hwmode vepa
```
#### What is VEPA?
In standard mode, the software upgrade to the `VEB` in the hypervisor simply forces each VM frame out to the external switch regardless of destination. This causes no change for destination MAC addresses external to the host, but for destinations within the host (another VM in the same VLAN) it forces that traffic to the upstream switch, which forwards it back instead of handling it internally (called a hairpin turn). It is this hairpin turn that requires the upstream switch to have updated firmware: typical STP behavior prevents a switch from forwarding a frame back down the port it was received on. The firmware update allows the physical host and the upstream switch to negotiate a VEPA port, which then allows this hairpin turn.
VEPA simply forces VM traffic to be handled by an external switch. This allows each VM frame flow to be monitored, managed, and secured with all of the tools available to the physical switch. It does not provide any type of individual tunnel for the VM, or a configurable switchport, but it does allow for things like flow-statistics gathering, ACL enforcement, etc. Basically, we are just pushing the MAC forwarding decision to the physical switch and allowing that switch to perform whatever functions it has available on each transaction. The drawback is that we now perform one ingress and one egress for each frame that was previously handled internally, so there are bandwidth and latency considerations. Functions like Single Root I/O Virtualization (SR-IOV) and Direct Path I/O can alleviate some of the latency issues when implementing this. Like any technology, there are trade-offs that must be weighed; in this case the added control and functionality should outweigh the additional bandwidth and latency costs.
More details about `VEPA` can be found [here](https://www.ieee802.org/1/files/public/docs2009/new-hudson-vepa_seminar-20090514d.pdf).
%package -n python3-fabric-am-handlers
Summary: Fabric Aggregate Manager Handlers
Provides: python-fabric-am-handlers
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-fabric-am-handlers
# AMHandlers
## Aggregate Manager
An aggregate manager (AM) controls access to the substrate components. It controls some set of infrastructure resources in a particular site consisting of a set of servers, storage units, network elements, or other components under common ownership and control. AMs inform brokers about available resources by passing resource advertisement information models to them. AMs may be associated with more than one broker, and the partitioning of resources between brokers is a decision left to the AM. Oversubscription is possible, depending on the deployment needs.
FABRIC enables a substrate provider to outsource resource arbitration and calendar scheduling to a broker. By delegating resources to the broker, the AM consents to the broker's policies and agrees to try to honor reservations issued by the broker if the user has authorization on the AM.
Besides common code, each AM type has resource-specific modules that determine its resource allocation behavior (Resource Management Policy) and the specific actions it takes to provision a sliver (Resource Handler). Both plugins are invoked by the AM common core code based on the resource type or the type of request being considered.
## Handlers
The AM upcalls a handler interface to set up and tear down each sliver. Resource handlers perform any substrate-specific configuration actions needed to implement slivers. The handler interface includes a probe method to poll the current status of a sliver and a modify method to adjust attributes of a sliver.
Handlers are registered and selected by resource type. Each handler invocation executes in an independent thread, so handlers may block for slow configuration actions. Handlers are invoked through a class called HandlerProcessor, which can invoke an interpreter for a handler scripting language. The handlers are written in Python.
Each handler implements three basic operation types for a resource:
- Create - provision a resource
  - Example: create a VM, a bare-metal node, or a network connection
- Delete - un-provision a resource
  - Undo the Create above
- Modify - modify the state of a resource
  - Example: modify a property of the VM or a network connection (e.g. change bandwidth)
Each operation can have subcommands and parameters that determine the details of the actions taken, some of which are discussed below. These parameters help ‘stitch’ multiple slivers together. A canonical example is passing network information from the handler provisioning the network to the handler provisioning a compute node, so that the compute node ends up with the correct network configuration (e.g. attached to the correct VLAN). Specific parameters for operations are a matter of convention between the resource management policy and the plugin.
Handlers receive the parameters as part of the provisioning workflow (a sequence of redeem operations) executed by the Orchestrator on the AMs. They can also pass information about reserved resources back to the Orchestrator as part of the standard exchange of messages between the AM and the Orchestrator during provisioning.
## Playbooks
Handlers use Ansible Playbooks for provisioning.
## Interface and design

## Configuration
AM Handlers require the following configuration to be set up:
- [Inventory](./fabric_am/playbooks/inventory) information for the headnode
- Handler config file (VM Handler example depicted below)
### VM Handler Config File
The VM Handler config file can be found at `fabric_am/config/vm_handler_config.yml`.
It describes the playbook location and the playbook names used for specific operations.
```
playbooks:
  location: /etc/fabric/actor/playbooks
  inventory_location: /etc/fabric/actor/playbooks/inventory
  VM: head_vm_provisioning.yml
  GPU: worker_pci_provisioning.yml
  SmartNIC: worker_pci_provisioning.yml
  SharedNIC: worker_pci_provisioning.yml
  FPGA: worker_pci_provisioning.yml
  NVME: worker_pci_provisioning.yml
```
### PCI Device Support
PCI devices are passed to the VMs using PCI passthrough. FABRIC also supports `SR-IOV` virtual functions, which are likewise passed to the VMs using PCI passthrough. In order to handle the hairpin connections on `SR-IOV`, `VEPA` mode is enabled on the underlying bridge for the dedicated card used to host the virtual functions.
```
[kissel@uky-w1 ~]$ sudo bridge link set dev ens6f1 hwmode vepa
```
#### What is VEPA?
In standard mode, the software upgrade to the `VEB` in the hypervisor simply forces each VM frame out to the external switch regardless of destination. This causes no change for destination MAC addresses external to the host, but for destinations within the host (another VM in the same VLAN) it forces that traffic to the upstream switch, which forwards it back instead of handling it internally (called a hairpin turn). It is this hairpin turn that requires the upstream switch to have updated firmware: typical STP behavior prevents a switch from forwarding a frame back down the port it was received on. The firmware update allows the physical host and the upstream switch to negotiate a VEPA port, which then allows this hairpin turn.
VEPA simply forces VM traffic to be handled by an external switch. This allows each VM frame flow to be monitored, managed, and secured with all of the tools available to the physical switch. It does not provide any type of individual tunnel for the VM, or a configurable switchport, but it does allow for things like flow-statistics gathering, ACL enforcement, etc. Basically, we are just pushing the MAC forwarding decision to the physical switch and allowing that switch to perform whatever functions it has available on each transaction. The drawback is that we now perform one ingress and one egress for each frame that was previously handled internally, so there are bandwidth and latency considerations. Functions like Single Root I/O Virtualization (SR-IOV) and Direct Path I/O can alleviate some of the latency issues when implementing this. Like any technology, there are trade-offs that must be weighed; in this case the added control and functionality should outweigh the additional bandwidth and latency costs.
More details about `VEPA` can be found [here](https://www.ieee802.org/1/files/public/docs2009/new-hudson-vepa_seminar-20090514d.pdf).
%package help
Summary: Development documents and examples for fabric-am-handlers
Provides: python3-fabric-am-handlers-doc
%description help
# AMHandlers
## Aggregate Manager
An aggregate manager (AM) controls access to the substrate components. It controls some set of infrastructure resources in a particular site consisting of a set of servers, storage units, network elements, or other components under common ownership and control. AMs inform brokers about available resources by passing resource advertisement information models to them. AMs may be associated with more than one broker, and the partitioning of resources between brokers is a decision left to the AM. Oversubscription is possible, depending on the deployment needs.
FABRIC enables a substrate provider to outsource resource arbitration and calendar scheduling to a broker. By delegating resources to the broker, the AM consents to the broker's policies and agrees to try to honor reservations issued by the broker if the user has authorization on the AM.
Besides common code, each AM type has resource-specific modules that determine its resource allocation behavior (Resource Management Policy) and the specific actions it takes to provision a sliver (Resource Handler). Both plugins are invoked by the AM common core code based on the resource type or the type of request being considered.
## Handlers
The AM upcalls a handler interface to set up and tear down each sliver. Resource handlers perform any substrate-specific configuration actions needed to implement slivers. The handler interface includes a probe method to poll the current status of a sliver and a modify method to adjust attributes of a sliver.
Handlers are registered and selected by resource type. Each handler invocation executes in an independent thread, so handlers may block for slow configuration actions. Handlers are invoked through a class called HandlerProcessor, which can invoke an interpreter for a handler scripting language. The handlers are written in Python.
Each handler implements three basic operation types for a resource:
- Create - provision a resource
  - Example: create a VM, a bare-metal node, or a network connection
- Delete - un-provision a resource
  - Undo the Create above
- Modify - modify the state of a resource
  - Example: modify a property of the VM or a network connection (e.g. change bandwidth)
Each operation can have subcommands and parameters that determine the details of the actions taken, some of which are discussed below. These parameters help ‘stitch’ multiple slivers together. A canonical example is passing network information from the handler provisioning the network to the handler provisioning a compute node, so that the compute node ends up with the correct network configuration (e.g. attached to the correct VLAN). Specific parameters for operations are a matter of convention between the resource management policy and the plugin.
Handlers receive the parameters as part of the provisioning workflow (a sequence of redeem operations) executed by the Orchestrator on the AMs. They can also pass information about reserved resources back to the Orchestrator as part of the standard exchange of messages between the AM and the Orchestrator during provisioning.
## Playbooks
Handlers use Ansible Playbooks for provisioning.
## Interface and design

## Configuration
AM Handlers require the following configuration to be set up:
- [Inventory](./fabric_am/playbooks/inventory) information for the headnode
- Handler config file (VM Handler example depicted below)
### VM Handler Config File
The VM Handler config file can be found at `fabric_am/config/vm_handler_config.yml`.
It describes the playbook location and the playbook names used for specific operations.
```
playbooks:
  location: /etc/fabric/actor/playbooks
  inventory_location: /etc/fabric/actor/playbooks/inventory
  VM: head_vm_provisioning.yml
  GPU: worker_pci_provisioning.yml
  SmartNIC: worker_pci_provisioning.yml
  SharedNIC: worker_pci_provisioning.yml
  FPGA: worker_pci_provisioning.yml
  NVME: worker_pci_provisioning.yml
```
### PCI Device Support
PCI devices are passed to the VMs using PCI passthrough. FABRIC also supports `SR-IOV` virtual functions, which are likewise passed to the VMs using PCI passthrough. In order to handle the hairpin connections on `SR-IOV`, `VEPA` mode is enabled on the underlying bridge for the dedicated card used to host the virtual functions.
```
[kissel@uky-w1 ~]$ sudo bridge link set dev ens6f1 hwmode vepa
```
#### What is VEPA?
In standard mode, the software upgrade to the `VEB` in the hypervisor simply forces each VM frame out to the external switch regardless of destination. This causes no change for destination MAC addresses external to the host, but for destinations within the host (another VM in the same VLAN) it forces that traffic to the upstream switch, which forwards it back instead of handling it internally (called a hairpin turn). It is this hairpin turn that requires the upstream switch to have updated firmware: typical STP behavior prevents a switch from forwarding a frame back down the port it was received on. The firmware update allows the physical host and the upstream switch to negotiate a VEPA port, which then allows this hairpin turn.
VEPA simply forces VM traffic to be handled by an external switch. This allows each VM frame flow to be monitored, managed, and secured with all of the tools available to the physical switch. It does not provide any type of individual tunnel for the VM, or a configurable switchport, but it does allow for things like flow-statistics gathering, ACL enforcement, etc. Basically, we are just pushing the MAC forwarding decision to the physical switch and allowing that switch to perform whatever functions it has available on each transaction. The drawback is that we now perform one ingress and one egress for each frame that was previously handled internally, so there are bandwidth and latency considerations. Functions like Single Root I/O Virtualization (SR-IOV) and Direct Path I/O can alleviate some of the latency issues when implementing this. Like any technology, there are trade-offs that must be weighed; in this case the added control and functionality should outweigh the additional bandwidth and latency costs.
More details about `VEPA` can be found [here](https://www.ieee802.org/1/files/public/docs2009/new-hudson-vepa_seminar-20090514d.pdf).
%prep
%autosetup -n fabric-am-handlers-1.4.3
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-fabric-am-handlers -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Wed May 31 2023 Python_Bot <Python_Bot@openeuler.org> - 1.4.3-1
- Package Spec generated