OpenStack Yoga Installation Notes (14): Launching an Instance

1. Official Documentation

OpenStack Installation Guide: https://docs.openstack.org/install-guide/

This installation is performed on Ubuntu 22.04 and generally follows the order of the OpenStack Installation Guide. The main contents are:

  • Environment setup (completed)
  • OpenStack service installation
    • keystone installation (completed)
    • glance installation (completed)
    • placement installation (completed)
    • nova installation (completed)
    • neutron installation (completed)
  • Launch an instance ◄──

Note: the OpenStack official website has been reorganized; the installation guides for the Yoga components can be found at:

OpenStack Docs: Yoga Installation Guides

2. Create virtual networks (provider network)

Notes on this step:

In OpenStack, an instance is a VM (virtual machine); to launch an instance is to create and start a VM.

The earlier neutron installation introduced two network topology options: provider networks and self-service networks.

Following the first option, the provider network topology, we create a network named provider, then a subnet named provider; the VMs we create will live on this subnet. The network is connected through a bridge to a physical network port (specified by the provider-physical-network parameter when creating the network). The network also includes a DHCP server that dynamically assigns IP addresses to the VMs.

First, record the IP address information of the controller node and the compute1 node, for comparison with later changes.

controller: 

root@controller:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:a8:e0:3c brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 10.0.20.11/24 brd 10.0.20.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea8:e03c/64 scope link
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:a8:e0:46 brd ff:ff:ff:ff:ff:ff
    altname enp2s2
    inet6 fe80::20c:29ff:fea8:e046/64 scope link
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:7b:e8:20 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever

virbr0 is a bridge created by default by KVM (libvirt); it provides NAT access to the external network for the virtual NICs attached to it.

virbr0 is assigned the IP address 192.168.122.1 by default and provides DHCP service for the other virtual NICs attached to it.

virbr0 is irrelevant to this installation.

compute1:

root@compute1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:51:16:68 brd ff:ff:ff:ff:ff:ff
    altname enp2s0
    inet 10.0.20.12/24 brd 10.0.20.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe51:1668/64 scope link
       valid_lft forever preferred_lft forever
3: ens35: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0c:29:51:16:72 brd ff:ff:ff:ff:ff:ff
    altname enp2s3
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:70:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
root@compute1:~# 

2.1 Create the network

root@osclient ~(admin/amdin)# openstack network create  --share --external \
>   --provider-physical-network provider \
>   --provider-network-type flat provider
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2024-09-21T09:06:01Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | 48f2b88e-7740-4d94-a631-69e2abadf25b |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | provider                             |
| port_security_enabled     | True                                 |
| project_id                | ee65b6c3961747b988ab8bd1cc19fb93     |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 1                                    |
| router:external           | External                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2024-09-21T09:06:01Z                 |
+---------------------------+--------------------------------------+
root@osclient ~(admin/amdin)# 
  1. openstack network create: the basic OpenStack command-line client (CLI) command for creating a new network.

  2. --share: marks the created network as shared, meaning multiple projects (tenants) can attach to it.

  3. --external: marks the created network as an external network. External networks are typically used to connect virtual networks to the physical network, so that VM instances can reach resources outside the cloud. (Declaring a network as external is what triggers the corresponding behavior.)

  4. --provider-physical-network provider: specifies the name of the physical network; here it is provider. Note that this physical network name provider is defined in /etc/neutron/plugins/ml2/ml2_conf.ini (flat_networks = provider). How the physical network provider is mapped, inside each host's virtual network environment, onto that host's physical network port is defined in /etc/neutron/plugins/ml2/linuxbridge_agent.ini (physical_interface_mappings = provider:ens34). This means the bridge later created on demand on each host for this network will be associated with the corresponding physical network port of that host.

  5. --provider-network-type flat: sets the network type to flat, i.e. the flat type of the physical network provider specified above. The physical port associated with provider on each host carries the VMs' traffic untagged (no 802.1Q encapsulation), sending it straight out of that physical port.

  6. provider: the name of the network being created; in this command, the new external network is named provider. (The name is arbitrary; take care not to confuse it with the physical network name provider above.)
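The mapping described in item 4 can be summarized by two configuration excerpts (consistent with the files quoted in the text; the interface name ens34 is specific to this lab's topology):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (controller)
# declares the physical network label "provider" as a flat network
[ml2_type_flat]
flat_networks = provider

# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (each host)
# maps the label "provider" onto that host's physical NIC
[linux_bridge]
physical_interface_mappings = provider:ens34
```

The label in --provider-physical-network must match flat_networks on the controller and the left-hand side of physical_interface_mappings on every host; the right-hand side can differ per host.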

(diagram: interpretation of the --provider-physical-network provider concept)

2.2 Create a subnet on the network

root@osclient ~(admin/amdin)# openstack subnet create --network provider \
>   --allocation-pool start=203.0.113.101,end=203.0.113.250 \
>   --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 \
>   --subnet-range 203.0.113.0/24 provider
+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| allocation_pools     | 203.0.113.101-203.0.113.250          |
| cidr                 | 203.0.113.0/24                       |
| created_at           | 2024-09-26T00:19:21Z                 |
| description          |                                      |
| dns_nameservers      | 8.8.4.4                              |
| dns_publish_fixed_ip | None                                 |
| enable_dhcp          | True                                 |
| gateway_ip           | 203.0.113.1                          |
| host_routes          |                                      |
| id                   | 8279842e-d7c5-4ba6-a037-831e0a72a938 |
| ip_version           | 4                                    |
| ipv6_address_mode    | None                                 |
| ipv6_ra_mode         | None                                 |
| name                 | provider                             |
| network_id           | 48f2b88e-7740-4d94-a631-69e2abadf25b |
| project_id           | ee65b6c3961747b988ab8bd1cc19fb93     |
| revision_number      | 0                                    |
| segment_id           | None                                 |
| service_types        |                                      |
| subnetpool_id        | None                                 |
| tags                 |                                      |
| updated_at           | 2024-09-26T00:19:21Z                 |
+----------------------+--------------------------------------+
root@osclient ~(admin/amdin)# 
root@osclient ~(admin/amdin)# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 48f2b88e-7740-4d94-a631-69e2abadf25b | provider | 8279842e-d7c5-4ba6-a037-831e0a72a938 |
+--------------------------------------+----------+--------------------------------------+
root@osclient ~(admin/amdin)# openstack subnet list
+--------------------------------------+----------+--------------------------------------+----------------+
| ID                                   | Name     | Network                              | Subnet         |
+--------------------------------------+----------+--------------------------------------+----------------+
| 8279842e-d7c5-4ba6-a037-831e0a72a938 | provider | 48f2b88e-7740-4d94-a631-69e2abadf25b | 203.0.113.0/24 |
+--------------------------------------+----------+--------------------------------------+----------------+
root@osclient ~(admin/amdin)# 
  • openstack subnet create: the command that creates a subnet.

  • --network provider: specifies the network the subnet belongs to; provider here is the network created earlier.

  • --allocation-pool start=203.0.113.101,end=203.0.113.250: defines the subnet's IP allocation pool; addresses from 203.0.113.101 through 203.0.113.250 will be handed out to devices on the subnet.

  • --dns-nameserver 8.8.4.4: specifies the DNS server for the subnet; here, Google's public DNS server 8.8.4.4.

  • --gateway 203.0.113.1: specifies the subnet's gateway, the device that routes traffic from the subnet to other networks; here, 203.0.113.1.

  • --subnet-range 203.0.113.0/24: defines the subnet range. 203.0.113.0/24 means the network address is 203.0.113.0 with netmask 255.255.255.0 (/24), so the subnet spans 256 addresses, of which 203.0.113.1 through 203.0.113.254 are assignable (.0 is the network address and .255 the broadcast address, normally not assigned to devices).

In summary, this command creates a subnet in OpenStack belonging to the network named provider, with the specified address range, DNS server, gateway, and IP allocation pool.
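The subnet arithmetic above can be double-checked with Python's standard ipaddress module (a quick sketch, independent of OpenStack):

```python
import ipaddress

subnet = ipaddress.ip_network("203.0.113.0/24")

# a /24 holds 256 addresses in total
print(subnet.num_addresses)          # 256
print(subnet.netmask)                # 255.255.255.0

# hosts() excludes the network (.0) and broadcast (.255) addresses
hosts = list(subnet.hosts())
print(hosts[0], hosts[-1])           # 203.0.113.1 203.0.113.254

# size of the allocation pool handed to Neutron
start = ipaddress.ip_address("203.0.113.101")
end = ipaddress.ip_address("203.0.113.250")
print(int(end) - int(start) + 1)     # 150 addresses available for instances
```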

At this point, the network/subnet created by OpenStack user admin in project admin is:

(diagram: network/subnet)

On the hosts, the actual effect is: on the controller node, a network namespace is created for the "provider" network, running a DHCP service dedicated to this network (the DHCP agent is installed on the controller node);

also on the controller node, a bridge is created for the "provider" network, and the DHCP namespace is attached to this bridge.

root@controller:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:a8:e0:3c brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 10.0.20.11/24 brd 10.0.20.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea8:e03c/64 scope link
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master brq48f2b88e-77 state UP group default qlen 1000
    link/ether 00:0c:29:a8:e0:46 brd ff:ff:ff:ff:ff:ff
    altname enp2s2
    inet6 fe80::20c:29ff:fea8:e046/64 scope link
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:7b:e8:20 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: tapa51b2fe4-04@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brq48f2b88e-77 state UP group default qlen 1000
    link/ether ce:a2:22:a5:77:6a brd ff:ff:ff:ff:ff:ff link-netns qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b
6: brq48f2b88e-77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 36:f4:3e:a8:e0:c3 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::34f4:3eff:fea8:e0c3/64 scope link
       valid_lft forever preferred_lft forever
root@controller:~# brctl show
bridge name     bridge id               STP enabled     interfaces
brq48f2b88e-77          8000.36f43ea8e0c3       no              ens34
                                                                tapa51b2fe4-04
virbr0          8000.5254007be820       yes
root@controller:~# ip netns exec qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-a51b2fe4-04@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:5b:d6:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 203.0.113.101/24 brd 203.0.113.255 scope global ns-a51b2fe4-04
       valid_lft forever preferred_lft forever
    inet 169.254.169.254/32 brd 169.254.169.254 scope global ns-a51b2fe4-04
       valid_lft forever preferred_lft forever
    inet6 fe80::a9fe:a9fe/128 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe5b:d6a5/64 scope link
       valid_lft forever preferred_lft forever
root@controller:~# ip netns exec qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b ps -aux | grep dns
libvirt+    1237  0.0  0.0  10084   392 ?        S    Sep25   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
root        1238  0.0  0.0  10084   392 ?        S    Sep25   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
nobody      6280  0.0  0.0  10504   408 ?        S    00:19   0:00 dnsmasq --no-hosts --no-resolv --pid-file=/var/lib/neutron/dhcp/48f2b88e-7740-4d94-a631-69e2abadf25b/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/48f2b88e-7740-4d94-a631-69e2abadf25b/host --addn-hosts=/var/lib/neutron/dhcp/48f2b88e-7740-4d94-a631-69e2abadf25b/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/48f2b88e-7740-4d94-a631-69e2abadf25b/opts --dhcp-leasefile=/var/lib/neutron/dhcp/48f2b88e-7740-4d94-a631-69e2abadf25b/leases --dhcp-match=set:ipxe,175 --dhcp-userclass=set:ipxe6,iPXE --local-service --bind-dynamic --dhcp-range=set:subnet-8279842e-d7c5-4ba6-a037-831e0a72a938,203.0.113.0,static,255.255.255.0,86400s --dhcp-option-force=option:mtu,1500 --dhcp-lease-max=256 --conf-file=/dev/null --domain=openstacklocal
root       12274  0.0  0.0   4024  2096 pts/0    S+   02:26   0:00 grep --color=auto dns
root@controller:~# 
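The names observed above tie together by convention: the DHCP namespace is qdhcp-&lt;network-id&gt;, and the Linux bridge created by the linuxbridge agent is brq plus the first 11 characters of the network ID (a sketch of the naming convention, derived from the output above):

```python
# ID of the "provider" network created earlier
net_id = "48f2b88e-7740-4d94-a631-69e2abadf25b"

# namespace holding the dnsmasq DHCP server for this network
namespace = f"qdhcp-{net_id}"

# bridge name: "brq" + first 11 characters of the network ID
bridge = f"brq{net_id[:11]}"

print(namespace)  # qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b
print(bridge)     # brq48f2b88e-77
```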

3. Create the m1.nano flavor

root@osclient ~(admin/amdin)# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| description                | None    |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
root@osclient ~(admin/amdin)# 

This command creates a VM flavor named m1.nano with 1 vCPU, 64 MB of RAM, and 1 GB of disk. It can be used to launch instances with very low resource requirements.

4. Generate a key pair

1. Before launching an instance, the user's public key must be added to the compute service. This sets up SSH public-key authentication so that the instance (i.e. the VM) can be connected to securely after it boots.

Here we log in as user "myuser" into project "myproject" to create the VM.

root@osclient:~# cat demo-openrc 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='\u@\h \W(myproject/myuser)\$ '
root@osclient:~# source demo-openrc 
root@osclient ~(myproject/myuser)# pwd
/root
root@osclient ~(myproject/myuser)# 

2. Generate the SSH key pair

root@osclient ~(myproject/myuser)# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa): 
root@osclient ~(myproject/myuser)# ls .ssh
authorized_keys  id_rsa  id_rsa.pub  known_hosts
root@osclient ~(myproject/myuser)# cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLdHcjoGyJ1sZPf8uNgncglzcVpgTMphK+jZecOvuAWZcr224pevVaa7OCCjHY10WjxG94It/ZhO7s1PYK6bmV/3p116h1CypK4URXg8u3FV6nEWk4lD/bykVY6vyo1GyNlzXiYM4g5b+Q1B0q2/6BScWaciSv7ujCJj7FlV7lh+jMkXaOU/BBCDfpMP9tvnzujMo2giy29SZycN4JETR69fPNtI0Lvw+lERWZy9bUn9TenbhivKMeEpnO2aFrUyztq4DJlA4C+nvApDm+yDRVW2+Lb02doEc8159FR48usW5mGALUnHLQ2dtmLOjXJeDA6acn9Yx96cWuWHse477CbVu38lsR1sHKnI+Lz4IwK0Fj5iduGwMqeTnKM1Z5z6hF1Nert4YsETPd6A8pQ5U4jjMzYly1xiA3wAcoaM8hFpLW0UVl//SiYjcwwb23rhAH9WgliY+vxO3M+Fu0eodavzZuyAEqyd/IeDD7vEBYRqAzZTYHK6lBbHBD3I/aHg0= root@osclient

The command ssh-keygen -q -N "" generates an SSH key pair; its parameters are:

  • ssh-keygen: the command that generates an SSH key pair.

  • -q: quiet mode; no informational output while the key pair is generated.

  • -N "": -N is normally followed by a string that sets the key's passphrase. Here -N "" sets an empty passphrase, i.e. the key pair is generated without passphrase protection.

After running the command, a new key pair is created in the default SSH key location (usually the ~/.ssh directory):

  • the public key file, usually named id_rsa.pub
  • the private key file, usually named id_rsa

The public key can be added to a remote server's ~/.ssh/authorized_keys file to enable passwordless login.

3. Upload the public key to OpenStack

root@osclient ~(myproject/myuser)# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| created_at  | None                                            |
| fingerprint | 9e:63:db:4f:eb:5c:6f:e1:ef:45:e2:77:59:bd:ef:40 |
| id          | mykey                                           |
| is_deleted  | None                                            |
| name        | mykey                                           |
| type        | ssh                                             |
| user_id     | 9382b59561c04dd1abf0a4cb7a8252ec                |
+-------------+-------------------------------------------------+
root@osclient ~(myproject/myuser)# 
root@osclient ~(myproject/myuser)# openstack keypair list
+-------+-------------------------------------------------+------+
| Name  | Fingerprint                                     | Type |
+-------+-------------------------------------------------+------+
| mykey | 9e:63:db:4f:eb:5c:6f:e1:ef:45:e2:77:59:bd:ef:40 | ssh  |
+-------+-------------------------------------------------+------+
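The Fingerprint column above is, by convention, the MD5 digest of the base64-decoded public-key blob, printed as colon-separated hex pairs. A minimal sketch of how such a fingerprint is derived (ssh_md5_fingerprint is an illustrative helper, not an OpenStack API):

```python
import base64
import hashlib

def ssh_md5_fingerprint(pubkey_line: str) -> str:
    """MD5 fingerprint of an OpenSSH public key line ('ssh-rsa AAAA... comment')."""
    blob = base64.b64decode(pubkey_line.split()[1])   # decode the base64 key blob
    digest = hashlib.md5(blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# applied to the uploaded key, this should reproduce the Fingerprint column:
# with open("/root/.ssh/id_rsa.pub") as f:
#     print(ssh_md5_fingerprint(f.read()))
```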

5. Add security group rules

In OpenStack, the default security group is applied to all instances and contains default firewall rules that usually deny remote access to instances. For Linux images such as CirrOS, it is recommended to allow at least ICMP (ping) and secure shell (SSH).

Add new rules to the default security group:

1. Permit ICMP:

root@osclient ~(myproject/myuser)# openstack security group rule create --proto icmp default
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| created_at              | 2024-09-27T13:04:13Z                 |
| description             |                                      |
| direction               | ingress                              |
| ether_type              | IPv4                                 |
| id                      | 2cc95680-49b4-4b6b-8cea-52f8cb7302aa |
| name                    | None                                 |
| port_range_max          | None                                 |
| port_range_min          | None                                 |
| project_id              | f5e75a3f7cc347ad89d20dcfe70dae01     |
| protocol                | icmp                                 |
| remote_address_group_id | None                                 |
| remote_group_id         | None                                 |
| remote_ip_prefix        | 0.0.0.0/0                            |
| revision_number         | 0                                    |
| security_group_id       | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| tags                    | []                                   |
| tenant_id               | f5e75a3f7cc347ad89d20dcfe70dae01     |
| updated_at              | 2024-09-27T13:04:13Z                 |
+-------------------------+--------------------------------------+
root@osclient ~(myproject/myuser)# 

2. Permit secure shell (SSH) access:

root@osclient ~(myproject/myuser)# openstack security group rule create --proto tcp --dst-port 22 default
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| created_at              | 2024-09-27T13:07:47Z                 |
| description             |                                      |
| direction               | ingress                              |
| ether_type              | IPv4                                 |
| id                      | 6452f09e-cbce-4ff9-845e-dcfb7144f62d |
| name                    | None                                 |
| port_range_max          | 22                                   |
| port_range_min          | 22                                   |
| project_id              | f5e75a3f7cc347ad89d20dcfe70dae01     |
| protocol                | tcp                                  |
| remote_address_group_id | None                                 |
| remote_group_id         | None                                 |
| remote_ip_prefix        | 0.0.0.0/0                            |
| revision_number         | 0                                    |
| security_group_id       | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| tags                    | []                                   |
| tenant_id               | f5e75a3f7cc347ad89d20dcfe70dae01     |
| updated_at              | 2024-09-27T13:07:47Z                 |
+-------------------------+--------------------------------------+
root@osclient ~(myproject/myuser)

3. List the rules in the security group:

root@osclient ~(myproject/myuser)# openstack security group rule list
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+--------------------------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Direction | Remote Security Group                | Remote Address Group | Security Group                       |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+--------------------------------------+
| 1adec9af-14a9-4288-8364-e79a8fa3b75a | None        | IPv4      | 0.0.0.0/0 |            | ingress   | 15dfe688-d6fc-4231-a670-7b832e08fb9d | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| 2cc95680-49b4-4b6b-8cea-52f8cb7302aa | icmp        | IPv4      | 0.0.0.0/0 |            | ingress   | None                                 | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| 6452f09e-cbce-4ff9-845e-dcfb7144f62d | tcp         | IPv4      | 0.0.0.0/0 | 22:22      | ingress   | None                                 | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| a5046171-f9e9-451f-acfe-662ef32ea651 | None        | IPv6      | ::/0      |            | ingress   | 15dfe688-d6fc-4231-a670-7b832e08fb9d | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| a7dffce7-946e-421e-bfa9-22fa65f4bf7a | None        | IPv4      | 0.0.0.0/0 |            | egress    | None                                 | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| d9bae044-c411-4d73-a5f4-ab422e3152a9 | None        | IPv6      | ::/0      |            | egress    | None                                 | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+--------------------------------------+
root@osclient ~(myproject/myuser)# 

6. Launch an instance

6.1 Determine instance options

root@osclient:~# source demo-openrc 
root@osclient ~(myproject/myuser)# 
root@osclient ~(myproject/myuser)# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0  | m1.nano |  64 |    1 |         0 |     1 | True      |
+----+---------+-----+------+-----------+-------+-----------+
root@osclient ~(myproject/myuser)# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 429decdd-9230-49c0-b735-70364c226eb5 | cirros | active |
+--------------------------------------+--------+--------+
root@osclient ~(myproject/myuser)# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 48f2b88e-7740-4d94-a631-69e2abadf25b | provider | 8279842e-d7c5-4ba6-a037-831e0a72a938 |
+--------------------------------------+----------+--------------------------------------+
root@osclient ~(myproject/myuser)# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID                                   | Name    | Description            | Project                          | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 15dfe688-d6fc-4231-a670-7b832e08fb9d | default | Default security group | f5e75a3f7cc347ad89d20dcfe70dae01 | []   |
+--------------------------------------+---------+------------------------+----------------------------------+------+
root@osclient ~(myproject/myuser)# 

6.2 Launch the instance (an error and its troubleshooting are recorded; the cause was an incorrect sample configuration)

root@osclient ~(myproject/myuser)# openstack server create --flavor m1.nano --image cirros \
>   --nic net-id=48f2b88e-7740-4d94-a631-69e2abadf25b --security-group default \
>   --key-name mykey provider-instance
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
<class 'keystoneauth1.exceptions.discovery.DiscoveryFailure'> (HTTP 500) (Request-ID: req-e0b4228c-3b17-4c5b-9033-b36f7793d553)
root@osclient ~(myproject/myuser)# 

Launching the instance fails with an error. The create request is sent first to nova-api, so check the nova-api log:

root@controller:~# tail -n 1000 /var/log/nova/nova-api.log
...
2024-09-27 13:36:36.360 1642 WARNING keystoneauth.identity.generic.base [req-a4e30ec1-d74e-413b-8a2f-07508bfe6e5f 9382b59561c04dd1abf0a4cb7a8252ec f5e75a3f7cc347ad89d20dcfe70dae01 - default default] Failed to discover available identity versions when contacting https://controller/identity. Attempting to parse version from URL.: keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://controller/identity: HTTPSConnectionPool(host='controller', port=443): Max retries exceeded with url: /identity (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fea2abebdf0>: Failed to establish a new connection: [Errno 111] ECONNREFUSED'))
2024-09-27 13:36:36.366 1642 ERROR nova.api.openstack.wsgi [req-a4e30ec1-d74e-413b-8a2f-07508bfe6e5f 9382b59561c04dd1abf0a4cb7a8252ec f5e75a3f7cc347ad89d20dcfe70dae01 - default default] Unexpected exception in API method: keystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Unable to establish connection to https://controller/identity: HTTPSConnectionPool(host='controller', port=443): Max retries exceeded with url: /identity (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fea2abebdf0>: Failed to establish a new connection: [Errno 111] ECONNREFUSED'))

The error message indicates a problem when trying to reach the OpenStack Identity service (Keystone): a connection to https://controller/identity could not be established, and the maximum number of retries was exceeded. This is usually caused by a network problem or a configuration error.
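A quick way to tell which endpoint is actually listening (the faulty https://controller:443 versus the working http://controller:5000) is a plain TCP connect test; a sketch using only the Python standard library (the hostnames and ports are the ones from this deployment):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# expected in this deployment:
#   tcp_reachable("controller", 443)   -> False (nothing listens on 443 -> ECONNREFUSED)
#   tcp_reachable("controller", 5000)  -> True  (keystone behind apache2)
```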

Check /etc/nova/nova.conf on the controller node, modify it as shown below, and restart the service:

[service_user]
send_service_user_token = true
auth_url = https://controller/identity  <-- change to: auth_url = http://controller:5000/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = openstack 

The VM can now be created, but its status turns out to be ERROR:

root@osclient ~(myproject/myuser)# openstack server create --flavor m1.nano --image cirros   --nic net-id=48f2b88e-7740-4d94-a631-69e2abadf25b --security-group default   --key-name mykey provider-instance
+-----------------------------+-----------------------------------------------+
| Field                       | Value                                         |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                        |
| OS-EXT-AZ:availability_zone |                                               |
| OS-EXT-STS:power_state      | NOSTATE                                       |
| OS-EXT-STS:task_state       | scheduling                                    |
| OS-EXT-STS:vm_state         | building                                      |
| OS-SRV-USG:launched_at      | None                                          |
| OS-SRV-USG:terminated_at    | None                                          |
| accessIPv4                  |                                               |
| accessIPv6                  |                                               |
| addresses                   |                                               |
| adminPass                   | ZLvm7CtGav5B                                  |
| config_drive                |                                               |
| created                     | 2024-09-27T13:51:22Z                          |
| flavor                      | m1.nano (0)                                   |
| hostId                      |                                               |
| id                          | 23bab8ab-5ce5-461b-9f9b-b5bfcff45529          |
| image                       | cirros (429decdd-9230-49c0-b735-70364c226eb5) |
| key_name                    | mykey                                         |
| name                        | provider-instance                             |
| progress                    | 0                                             |
| project_id                  | f5e75a3f7cc347ad89d20dcfe70dae01              |
| properties                  |                                               |
| security_groups             | name='15dfe688-d6fc-4231-a670-7b832e08fb9d'   |
| status                      | BUILD                                         |
| updated                     | 2024-09-27T13:51:22Z                          |
| user_id                     | 9382b59561c04dd1abf0a4cb7a8252ec              |
| volumes_attached            |                                               |
+-----------------------------+-----------------------------------------------+
root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+--------+----------+--------+---------+
| ID                                   | Name              | Status | Networks | Image  | Flavor  |
+--------------------------------------+-------------------+--------+----------+--------+---------+
| 23bab8ab-5ce5-461b-9f9b-b5bfcff45529 | provider-instance | ERROR  |          | cirros | m1.nano |
+--------------------------------------+-------------------+--------+----------+--------+---------+
root@osclient ~(myproject/myuser)# 

Check the log on compute1:

root@compute1:~# tail -n 2000 /var/log/nova/nova-compute.log

The log shows an error similar to the one above. Edit the same section of /etc/nova/nova.conf on compute1, then restart the service:

root@compute1:~# vi /etc/nova/nova.conf 
...
[service_user]
send_service_user_token = true
auth_url = http://controller:5000/identity  <--- corrected value
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = openstack
...
root@compute1:~# service nova-compute restart
root@compute1:~#
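The config edit above can also be scripted. Below is a minimal Python sketch (illustrative only: it operates on an in-memory sample rather than the real /etc/nova/nova.conf) that rewrites `auth_url` in the `[service_user]` section using the standard-library `configparser`:

```python
import configparser

def fix_service_user_auth_url(conf_text, new_url="http://controller:5000/identity"):
    """Rewrite auth_url in the [service_user] section of nova.conf-style text
    and return the updated value (for illustration, instead of writing a file)."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    cp["service_user"]["auth_url"] = new_url
    return cp["service_user"]["auth_url"]

# Hypothetical sample mimicking the broken section before the fix.
sample = """
[service_user]
send_service_user_token = true
auth_url = http://controller/identity
auth_strategy = keystone
"""
print(fix_service_user_auth_url(sample))
```

In a real deployment you would write the parser back out to /etc/nova/nova.conf and then restart nova-compute, as done above.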

Delete the previously created instance:

root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+--------+----------+--------+---------+
| ID                                   | Name              | Status | Networks | Image  | Flavor  |
+--------------------------------------+-------------------+--------+----------+--------+---------+
| 23bab8ab-5ce5-461b-9f9b-b5bfcff45529 | provider-instance | ERROR  |          | cirros | m1.nano |
+--------------------------------------+-------------------+--------+----------+--------+---------+
root@osclient ~(myproject/myuser)# openstack server delete  23bab8ab-5ce5-461b-9f9b-b5bfcff45529

Re-run the server create command; this time the instance builds successfully:

root@osclient ~(myproject/myuser)# openstack server create --flavor m1.nano --image cirros   --nic net-id=48f2b88e-7740-4d94-a631-69e2abadf25b --security-group default   --key-name mykey provider-instance
+-----------------------------+-----------------------------------------------+
| Field                       | Value                                         |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                        |
| OS-EXT-AZ:availability_zone |                                               |
| OS-EXT-STS:power_state      | NOSTATE                                       |
| OS-EXT-STS:task_state       | scheduling                                    |
| OS-EXT-STS:vm_state         | building                                      |
| OS-SRV-USG:launched_at      | None                                          |
| OS-SRV-USG:terminated_at    | None                                          |
| accessIPv4                  |                                               |
| accessIPv6                  |                                               |
| addresses                   |                                               |
| adminPass                   | Fkcpj47EGcxG                                  |
| config_drive                |                                               |
| created                     | 2024-09-27T14:09:37Z                          |
| flavor                      | m1.nano (0)                                   |
| hostId                      |                                               |
| id                          | 4e2e96de-b9be-4da8-925c-e3048d8a3b44          |
| image                       | cirros (429decdd-9230-49c0-b735-70364c226eb5) |
| key_name                    | mykey                                         |
| name                        | provider-instance                             |
| progress                    | 0                                             |
| project_id                  | f5e75a3f7cc347ad89d20dcfe70dae01              |
| properties                  |                                               |
| security_groups             | name='15dfe688-d6fc-4231-a670-7b832e08fb9d'   |
| status                      | BUILD                                         |
| updated                     | 2024-09-27T14:09:37Z                          |
| user_id                     | 9382b59561c04dd1abf0a4cb7a8252ec              |
| volumes_attached            |                                               |
+-----------------------------+-----------------------------------------------+
root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| ID                                   | Name              | Status | Networks               | Image  | Flavor  |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| 4e2e96de-b9be-4da8-925c-e3048d8a3b44 | provider-instance | ACTIVE | provider=203.0.113.155 | cirros | m1.nano |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
root@osclient ~(myproject/myuser)# 
root@osclient ~(myproject/myuser)# openstack server show 4e2e96de-b9be-4da8-925c-e3048d8a3b44
+-----------------------------+----------------------------------------------------------+
| Field                       | Value                                                    |
+-----------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                                   |
| OS-EXT-AZ:availability_zone | nova                                                     |
| OS-EXT-STS:power_state      | Running                                                  |
| OS-EXT-STS:task_state       | None                                                     |
| OS-EXT-STS:vm_state         | active                                                   |
| OS-SRV-USG:launched_at      | 2024-09-27T14:09:17.000000                               |
| OS-SRV-USG:terminated_at    | None                                                     |
| accessIPv4                  |                                                          |
| accessIPv6                  |                                                          |
| addresses                   | provider=203.0.113.155                                   |
| config_drive                |                                                          |
| created                     | 2024-09-27T14:09:37Z                                     |
| flavor                      | m1.nano (0)                                              |
| hostId                      | 892d1a79d804f6b0fbfb68938ec0df8a0abc8e3d52660529538123e4 |
| id                          | 4e2e96de-b9be-4da8-925c-e3048d8a3b44                     |
| image                       | cirros (429decdd-9230-49c0-b735-70364c226eb5)            |
| key_name                    | mykey                                                    |
| name                        | provider-instance                                        |
| progress                    | 0                                                        |
| project_id                  | f5e75a3f7cc347ad89d20dcfe70dae01                         |
| properties                  |                                                          |
| security_groups             | name='default'                                           |
| status                      | ACTIVE                                                   |
| updated                     | 2024-09-27T22:23:25Z                                     |
| user_id                     | 9382b59561c04dd1abf0a4cb7a8252ec                         |
| volumes_attached            |                                                          |
+-----------------------------+----------------------------------------------------------+
root@osclient ~(myproject/myuser)# 
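The Networks/addresses field (here `provider=203.0.113.155`) packs the network name and its addresses into one string. Assuming the usual `name=addr[, addr]; name=...` layout of the CLI output (an assumption of this sketch, not guaranteed by the source), it can be parsed like this:

```python
def parse_networks(field):
    """Parse a 'Networks' column value such as 'provider=203.0.113.155'
    into {network_name: [addresses]}. Multiple networks are assumed to be
    separated by ';', multiple addresses by ','."""
    nets = {}
    for part in field.split(";"):
        name, addrs = part.strip().split("=", 1)
        nets[name] = [a.strip() for a in addrs.split(",")]
    return nets

print(parse_networks("provider=203.0.113.155"))
```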

6.3 Checking the hypervisors reveals a problem (a manual configuration error)

root@controller ~(admin/amdin)# nova hypervisor-list
+--------------------------------------+---------------------+-------+---------+
| ID                                   | Hypervisor hostname | State | Status  |
+--------------------------------------+---------------------+-------+---------+
| 205c89e0-fb82-4def-a0f6-bfe4b120ab79 | compute1            | up    | enabled |
| 027eb56f-a860-41b8-afa3-91b65f1c8777 | controller          | up    | enabled |
+--------------------------------------+---------------------+-------+---------+
root@controller ~(admin/amdin)# nova hypervisor-show 205c89e0-fb82-4def-a0f6-bfe4b120ab79
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| host_ip                 | 10.0.20.11                           |
| hypervisor_hostname     | compute1                             |
| hypervisor_type         | QEMU                                 |
| hypervisor_version      | 6002000                              |
| id                      | 205c89e0-fb82-4def-a0f6-bfe4b120ab79 |
| service_disabled_reason | None                                 |
| service_host            | compute1                             |
| service_id              | c04e53a4-fdb8-4915-9b1a-f5d195e753c4 |
| state                   | up                                   |
| status                  | enabled                              |
| uptime                  |  23:22:06 up  1:04,  1 user,  load   |
|                         | average: 0.19, 0.24, 0.25            |
+-------------------------+--------------------------------------+
root@controller ~(admin/amdin)# nova hypervisor-show 027eb56f-a860-41b8-afa3-91b65f1c8777
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| host_ip                 | 10.0.20.11                           |
| hypervisor_hostname     | controller                           |
| hypervisor_type         | QEMU                                 |
| hypervisor_version      | 6002000                              |
| id                      | 027eb56f-a860-41b8-afa3-91b65f1c8777 |
| service_disabled_reason | None                                 |
| service_host            | controller                           |
| service_id              | b3d4e71d-088a-4249-8d8f-e6d8528c698d |
| state                   | up                                   |
| status                  | enabled                              |
| uptime                  |  23:22:46 up  1:05,  1 user,  load   |
|                         | average: 0.18, 0.18, 0.17            |
+-------------------------+--------------------------------------+

Both hypervisors report host_ip 10.0.20.11, which is wrong: compute1 should be 10.0.20.12. Checking /etc/nova/nova.conf on compute1 reveals the misconfiguration:

[DEFAULT]
log_dir = /var/log/nova
lock_path = /var/lock/nova
state_path = /var/lib/nova
transport_url = rabbit://openstack:openstack@controller
my_ip = 10.0.20.11  <---- should be 10.0.20.12!
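A tiny sanity check would have caught this copy-paste mistake early. The sketch below is hypothetical; the inventory mapping simply encodes this lab's addressing plan:

```python
# Hypothetical inventory: the expected management IP of each node in this lab.
EXPECTED_MY_IP = {"controller": "10.0.20.11", "compute1": "10.0.20.12"}

def check_my_ip(hostname, configured_ip):
    """Return True when nova.conf's my_ip matches the node's management IP."""
    return EXPECTED_MY_IP.get(hostname) == configured_ip

print(check_my_ip("controller", "10.0.20.11"))  # the controller is fine
print(check_my_ip("compute1", "10.0.20.11"))    # the bug found above
```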

After fixing the value, reboot compute1:

root@compute1 ~(admin/amdin)# vi /etc/nova/nova.conf
root@compute1 ~(admin/amdin)# reboot

To be safe, reboot the controller as well:

root@controller ~(admin/amdin)# reboot

Check again; the hypervisors now report the correct addresses:

root@controller ~(admin/amdin)# nova hypervisor-list
+--------------------------------------+---------------------+-------+---------+
| ID                                   | Hypervisor hostname | State | Status  |
+--------------------------------------+---------------------+-------+---------+
| 205c89e0-fb82-4def-a0f6-bfe4b120ab79 | compute1            | up    | enabled |
| 027eb56f-a860-41b8-afa3-91b65f1c8777 | controller          | up    | enabled |
+--------------------------------------+---------------------+-------+---------+
root@controller ~(admin/amdin)# nova hypervisor-show 205c89e0-fb82-4def-a0f6-bfe4b120ab79
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| host_ip                 | 10.0.20.12                           |
| hypervisor_hostname     | compute1                             |
| hypervisor_type         | QEMU                                 |
| hypervisor_version      | 6002000                              |
| id                      | 205c89e0-fb82-4def-a0f6-bfe4b120ab79 |
| service_disabled_reason | None                                 |
| service_host            | compute1                             |
| service_id              | c04e53a4-fdb8-4915-9b1a-f5d195e753c4 |
| state                   | up                                   |
| status                  | enabled                              |
| uptime                  |  00:39:54 up 4 min,  1 user,  load   |
|                         | average: 0.08, 0.16, 0.08            |
+-------------------------+--------------------------------------+
root@controller ~(admin/amdin)# nova hypervisor-show 027eb56f-a860-41b8-afa3-91b65f1c8777
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| host_ip                 | 10.0.20.11                           |
| hypervisor_hostname     | controller                           |
| hypervisor_type         | QEMU                                 |
| hypervisor_version      | 6002000                              |
| id                      | 027eb56f-a860-41b8-afa3-91b65f1c8777 |
| service_disabled_reason | None                                 |
| service_host            | controller                           |
| service_id              | b3d4e71d-088a-4249-8d8f-e6d8528c698d |
| state                   | up                                   |
| status                  | enabled                              |
| uptime                  |  00:35:22 up 2 min,  1 user,  load   |
|                         | average: 0.76, 0.52, 0.21            |
+-------------------------+--------------------------------------+
root@controller ~(admin/amdin)#

6.4 Finding which hypervisor an instance runs on

Run openstack server start to restart the instance:

root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+---------+------------------------+--------+---------+
| ID                                   | Name              | Status  | Networks               | Image  | Flavor  |
+--------------------------------------+-------------------+---------+------------------------+--------+---------+
| 4e2e96de-b9be-4da8-925c-e3048d8a3b44 | provider-instance | SHUTOFF | provider=203.0.113.155 | cirros | m1.nano |
+--------------------------------------+-------------------+---------+------------------------+--------+---------+
root@osclient ~(myproject/myuser)# openstack server start provider-instance
root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| ID                                   | Name              | Status | Networks               | Image  | Flavor  |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| 4e2e96de-b9be-4da8-925c-e3048d8a3b44 | provider-instance | ACTIVE | provider=203.0.113.155 | cirros | m1.nano |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
root@osclient ~(myproject/myuser)# 

On the controller, first list the hypervisors, then inspect which instances each hypervisor is running:

root@controller ~(admin/amdin)# nova hypervisor-list
+--------------------------------------+---------------------+-------+---------+
| ID                                   | Hypervisor hostname | State | Status  |
+--------------------------------------+---------------------+-------+---------+
| 205c89e0-fb82-4def-a0f6-bfe4b120ab79 | compute1            | up    | enabled |
| 027eb56f-a860-41b8-afa3-91b65f1c8777 | controller          | up    | enabled |
+--------------------------------------+---------------------+-------+---------+

Check whether the controller hosts any instances; it hosts none:

root@controller ~(admin/amdin)# nova hypervisor-servers controller
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+

Check compute1; it hosts one instance:

root@controller ~(admin/amdin)# nova hypervisor-servers compute1
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| ID                                   | Name              | Hypervisor ID                        | Hypervisor Hostname |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| 4e2e96de-b9be-4da8-925c-e3048d8a3b44 | instance-00000003 | 205c89e0-fb82-4def-a0f6-bfe4b120ab79 | compute1            |
+--------------------------------------+-------------------+--------------------------------------+---------------------+

The same information is available via virsh on compute1:

root@compute1:~# virsh list
 Id   Name                State
-----------------------------------
 1    instance-00000003   running

root@compute1:~# virsh dominfo instance-00000003
Id:             1
Name:           instance-00000003
UUID:           4e2e96de-b9be-4da8-925c-e3048d8a3b44
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       40.7s
Max memory:     65536 KiB
Used memory:    65536 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: apparmor
Security DOI:   0
Security label: libvirt-4e2e96de-b9be-4da8-925c-e3048d8a3b44 (enforcing)
root@compute1:~# 
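Note that libvirt's instance name (instance-00000003) maps back to the OpenStack server UUID via the UUID field. virsh dominfo output is easy to post-process; the following sketch parses a pasted sample of the output above (not a live libvirt connection) into a dictionary so the UUID can be compared against openstack server list:

```python
# A pasted fragment of the `virsh dominfo instance-00000003` output above.
dominfo = """\
Id:             1
Name:           instance-00000003
UUID:           4e2e96de-b9be-4da8-925c-e3048d8a3b44
OS Type:        hvm
State:          running
CPU(s):         1
"""

# Split each 'Key:   value' line on the first colon only,
# so values containing colons would survive intact.
info = {k.strip(): v.strip()
        for k, v in (line.split(":", 1)
                     for line in dominfo.splitlines() if ":" in line)}
print(info["UUID"], info["State"])
```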

 7、Access the instance using the virtual console

1、Get the console URL of the instance

root@osclient ~(myproject/myuser)# openstack console url show provider-instance
+----------+-------------------------------------------------------------------------------------------+
| Field    | Value                                                                                     |
+----------+-------------------------------------------------------------------------------------------+
| protocol | vnc                                                                                       |
| type     | novnc                                                                                     |
| url      | http://controller:6080/vnc_auto.html?path=%3Ftoken%3D1674eeed-8a9d-4c2e-aef3-1f26ddf2f2b6 |
+----------+-------------------------------------------------------------------------------------------+
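The url field embeds the one-time console token as a percent-encoded `path` query parameter that noVNC hands on to the websocket proxy. A small sketch of extracting it with Python's standard library:

```python
from urllib.parse import urlparse, parse_qs

url = ("http://controller:6080/vnc_auto.html"
       "?path=%3Ftoken%3D1674eeed-8a9d-4c2e-aef3-1f26ddf2f2b6")

# The 'path' parameter decodes to '?token=...', itself a query string.
path_value = parse_qs(urlparse(url).query)["path"][0]
token = parse_qs(path_value.lstrip("?"))["token"][0]
print(token)
```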

2、Access the URL from a browser on the PC at 10.0.20.1:

The console page shows no terminal output; the cause is still to be determined.

Check the log file:

root@controller ~(admin/amdin)# tail -n 100 /var/log/nova/nova-novncproxy.log
...
2024-09-28 01:36:39.857 3537 INFO nova.console.websocketproxy [-] 10.0.20.1 - - [28/Sep/2024 01:36:39] 10.0.20.1: Plain non-SSL (ws://) WebSocket connection
2024-09-28 01:36:39.858 3537 INFO nova.console.websocketproxy [-] 10.0.20.1 - - [28/Sep/2024 01:36:39] 10.0.20.1: Path: '/?token=1674eeed-8a9d-4c2e-aef3-1f26ddf2f2b6'
2024-09-28 01:36:40.031 3537 INFO nova.console.websocketproxy [req-0434a76f-5004-4af4-b9ac-697267eb3ede - - - - -]   6: connect info: ConsoleAuthToken(access_url_base='http://controller:6080/vnc_auto.html',console_type='novnc',created_at=2024-09-28T01:34:12Z,host='10.0.20.12',id=6,instance_uuid=4e2e96de-b9be-4da8-925c-e3048d8a3b44,internal_access_path=None,port=5900,token='***',updated_at=None)
2024-09-28 01:36:40.032 3537 INFO nova.console.websocketproxy [req-0434a76f-5004-4af4-b9ac-697267eb3ede - - - - -]   6: connecting to: 10.0.20.12:5900
2024-09-28 01:36:40.038 3537 INFO nova.console.securityproxy.rfb [req-0434a76f-5004-4af4-b9ac-697267eb3ede - - - - -] Finished security handshake, resuming normal proxy mode using secured socket
root@controller ~(admin/amdin)# 

The log reveals the following key points:

  1. WebSocket connection

    • A client at 10.0.20.1 attempts a plain, non-SSL WebSocket (ws://) connection to the Nova console proxy.
  2. Connection path

    • The request carries a token used to validate the connection.
  3. Console authentication info

    • The proxy accepts the attempt and resolves the console auth record: access URL, console type (noVNC), creation time, host address, instance UUID, port, and token.
  4. Connection to the VNC server

    • The proxy forwards the connection to the instance's VNC server at 10.0.20.12, port 5900.
  5. Security handshake

    • The handshake completes, and the proxy switches to normal proxy mode over the secured socket.
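The key line for debugging is `connecting to: 10.0.20.12:5900`, which tells you exactly where the proxy forwards the VNC traffic. A sketch of pulling that endpoint out of such a log line:

```python
import re

# One log line copied from nova-novncproxy.log above.
line = ("2024-09-28 01:36:40.032 3537 INFO nova.console.websocketproxy "
        "[req-0434a76f-5004-4af4-b9ac-697267eb3ede - - - - -]   6: "
        "connecting to: 10.0.20.12:5900")

# Extract the compute-node VNC endpoint the proxy forwards to.
m = re.search(r"connecting to: (\d+\.\d+\.\d+\.\d+):(\d+)", line)
host, port = m.group(1), int(m.group(2))
print(host, port)
```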

8、Recreate the instance

While pinging the "provider-instance" VM from the qdhcp namespace, the ping failed. It turned out that ens35 on compute1 was not configured correctly.

Run the following commands:

root@compute1:~# vi /etc/netplan/00-installer-config.yaml 
root@compute1:~# netplan apply
root@compute1:~# cat /etc/netplan/00-installer-config.yaml 
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens32:
      addresses:
      - 10.0.20.12/24
      nameservers:
        addresses:
        - 10.0.20.2
        search: []
      routes:
      - to: default
        via: 10.0.20.2
    ens35:
      dhcp4: false
  version: 2
root@compute1:~# netplan apply

To rule out other problems, the instance was recreated:

root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| ID                                   | Name              | Status | Networks               | Image  | Flavor  |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| 4e2e96de-b9be-4da8-925c-e3048d8a3b44 | provider-instance | ACTIVE | provider=203.0.113.155 | cirros | m1.nano |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
root@osclient ~(myproject/myuser)# openstack server delete 4e2e96de-b9be-4da8-925c-e3048d8a3b44
root@osclient ~(myproject/myuser)# openstack server list
root@osclient ~(myproject/myuser)# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 48f2b88e-7740-4d94-a631-69e2abadf25b | provider | 8279842e-d7c5-4ba6-a037-831e0a72a938 |
+--------------------------------------+----------+--------------------------------------+
root@osclient ~(myproject/myuser)# openstack server create --flavor m1.nano --image cirros \
>   --nic net-id=48f2b88e-7740-4d94-a631-69e2abadf25b --security-group default \
>   --key-name mykey provider-instance
+-----------------------------+-----------------------------------------------+
| Field                       | Value                                         |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                        |
| OS-EXT-AZ:availability_zone |                                               |
| OS-EXT-STS:power_state      | NOSTATE                                       |
| OS-EXT-STS:task_state       | scheduling                                    |
| OS-EXT-STS:vm_state         | building                                      |
| OS-SRV-USG:launched_at      | None                                          |
| OS-SRV-USG:terminated_at    | None                                          |
| accessIPv4                  |                                               |
| accessIPv6                  |                                               |
| addresses                   |                                               |
| adminPass                   | ee9VbWSvbbG8                                  |
| config_drive                |                                               |
| created                     | 2024-09-28T02:49:20Z                          |
| flavor                      | m1.nano (0)                                   |
| hostId                      |                                               |
| id                          | d2e4bc39-63c8-4c80-b33f-52f4e1891f50          |
| image                       | cirros (429decdd-9230-49c0-b735-70364c226eb5) |
| key_name                    | mykey                                         |
| name                        | provider-instance                             |
| progress                    | 0                                             |
| project_id                  | f5e75a3f7cc347ad89d20dcfe70dae01              |
| properties                  |                                               |
| security_groups             | name='15dfe688-d6fc-4231-a670-7b832e08fb9d'   |
| status                      | BUILD                                         |
| updated                     | 2024-09-28T02:49:20Z                          |
| user_id                     | 9382b59561c04dd1abf0a4cb7a8252ec              |
| volumes_attached            |                                               |
+-----------------------------+-----------------------------------------------+

9、Network topology after creating the instance

After the instance is created, the abstract network topology as seen by OpenStack:

[Figure: network topology from the OpenStack view]

The actual topology, in which OpenStack has created the qdhcp-xxxx namespace, the brq-xxxx bridge, and the provider-instance VM:

[Figure: the actual network topology]

10、Log in to the instance over SSH

1、The qdhcp namespace can ping the instance

root@controller ~(admin/amdin)# ip netns exec qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b ping 203.0.113.125
PING 203.0.113.125 (203.0.113.125) 56(84) bytes of data.
64 bytes from 203.0.113.125: icmp_seq=1 ttl=64 time=2.23 ms
64 bytes from 203.0.113.125: icmp_seq=2 ttl=64 time=0.786 ms
^C
--- 203.0.113.125 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.786/1.506/2.227/0.720 ms
root@controller ~(admin/amdin)# ip netns exec qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b ping 203.0.113.90
PING 203.0.113.90 (203.0.113.90) 56(84) bytes of data.
64 bytes from 203.0.113.90: icmp_seq=1 ttl=64 time=0.233 ms
64 bytes from 203.0.113.90: icmp_seq=2 ttl=64 time=0.323 ms
64 bytes from 203.0.113.90: icmp_seq=3 ttl=64 time=0.300 ms
^C
--- 203.0.113.90 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.233/0.285/0.323/0.038 ms
root@controller ~(admin/amdin)# 

2、Windows 11 can ping the instance

C:\>ipconfig

Windows IP 配置
...
以太网适配器 VMware Network Adapter VMnet6:

   连接特定的 DNS 后缀 . . . . . . . :
   本地链接 IPv6 地址. . . . . . . . : fe80::f73a:9:c195:8516%30
   IPv4 地址 . . . . . . . . . . . . : 203.0.113.90
   子网掩码  . . . . . . . . . . . . : 255.255.255.0
   默认网关. . . . . . . . . . . . . :

C:\>ping 203.0.113.125

正在 Ping 203.0.113.125 具有 32 字节的数据:
来自 203.0.113.1 的回复: 无法访问目标主机。
来自 203.0.113.125 的回复: 字节=32 时间=3ms TTL=64
来自 203.0.113.125 的回复: 字节=32 时间<1ms TTL=64
来自 203.0.113.125 的回复: 字节=32 时间<1ms TTL=64

203.0.113.125 的 Ping 统计信息:
    数据包: 已发送 = 4,已接收 = 4,丢失 = 0 (0% 丢失),
往返行程的估计时间(以毫秒为单位):
    最短 = 0ms,最长 = 3ms,平均 = 1ms

C:\>

3、SSH into provider-instance from Windows 11

Using SecureCRT:

Log in with username/password: cirros/gocubsgo

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:60:78:cd brd ff:ff:ff:ff:ff:ff
    inet 203.0.113.125/24 brd 203.0.113.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe60:78cd/64 scope link 
       valid_lft forever preferred_lft forever
$ 
