Storage & Tuning: Storage - IP-SAN Extension

File-system lock markers
GFS (lock table space)

             ---------        ---------        ---------
Cluster      | node1 |        | node2 |        | node3 |
nodes        ---------        ---------        ---------
                  \               |               /
                   \              |              /
                    +--------- switch ----------+
                   /              |               \
                  /               |                \
             ---------        ---------        ---------
Storage      | node4 |        | node5 |        | node6 |
nodes        ---------        ---------        ---------

Preparation

IP:    node1    172.16.1.1/24
       node2    172.16.1.2/24
       node3    172.16.1.3/24
       node4    172.16.1.4/24
       node5    172.16.1.5/24
       node6    172.16.1.6/24

hostname    (unique hostname set on every node)
/etc/hosts  (name resolution for all six nodes)
iptables    (stopped, or opened for cluster and iSCSI traffic)
selinux     (disabled or permissive)
yum         (repository configured)
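The name-resolution prerequisite, sketched for this topology (the uplooking.com FQDNs are taken from cluster.conf below; adjust to your environment). Every node should carry the same entries in /etc/hosts:

```
172.16.1.1    node1.uplooking.com    node1
172.16.1.2    node2.uplooking.com    node2
172.16.1.3    node3.uplooking.com    node3
172.16.1.4    node4.uplooking.com    node4
172.16.1.5    node5.uplooking.com    node5
172.16.1.6    node6.uplooking.com    node6
```

The clusternode names in cluster.conf must resolve to the cluster-network addresses, otherwise cman cannot identify the local node.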

1. Configure cluster nodes node1 and node2

    Install the cluster packages
[root@node1 ~]# yum install cman openais
[root@node1 ~]# yum install system-config-cluster

    Configure the cluster with system-config-cluster
[root@node1 ~]# cat /etc/cluster/cluster.conf 
<?xml version="1.0" ?>
<cluster config_version="2" name="iscsi_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.uplooking.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.uplooking.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
(With two_node="1" and expected_votes="1", either node alone keeps quorum when its peer fails; cman accepts this combination only for exactly two nodes.)
[root@node1 ~]# scp /etc/cluster/cluster.conf node2:/etc/cluster/
[root@node1 ~]# service cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]

[root@node1 ~]# cman_tool status
Version: 6.2.0
Config Version: 2
Cluster Name: iscsi_cluster
Cluster Id: 26292
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Quorum: 1  
Active subsystems: 7
Flags: 2node Dirty 
Ports Bound: 0  
Node name: node1.uplooking.com
Node ID: 1
Multicast addresses: 239.192.102.27 
Node addresses: 172.16.1.1 

2. Configure storage nodes node4 and node5
[root@node4 ~]# mkdir /iscsi
[root@node4 ~]# dd if=/dev/zero of=/iscsi/disk-node4 bs=1M count=500
[root@node4 ~]# yum install scsi-target-utils

[root@node4 ~]# vim /etc/tgt/targets.conf 

default-driver iscsi


# Continue if tgtadm exits with non-zero code (equivalent of
# --ignore-errors command line option)
#ignore-errors yes


# Sample target with one LUN only. Defaults to allow access for all initiators:

<target iqn.2012-02.com.uplooking:node4.target1>
    backing-store /iscsi/disk-node4
    write-cache off
    vendor_id node4
    product_id storage4
    initiator-address 172.16.1.1
    initiator-address 172.16.1.2
</target>
 
[root@node4 ~]# service tgtd start
Starting SCSI target daemon: Starting target framework daemon

[root@node4 ~]# tgt-admin --show
Target 1: iqn.2012-02.com.uplooking:node4.target1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: None
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 524 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: /iscsi/disk-node4
    Account information:
    ACL information:
        172.16.1.1
        172.16.1.2
         

3. Cluster nodes node1 and node2 discover and log in to the node4 and node5 targets

[root@node1 ~]# yum install iscsi-initiator-utils
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.4:3260
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Could not scan /sys/class/iscsi_transport.
iscsiadm: Could not scan /sys/class/iscsi_transport.
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Cannot perform discovery. Initiatorname required.
iscsiadm: Discovery process to 172.16.1.4:3260 failed to create a discovery session.
iscsiadm: Could not perform SendTargets discovery.
[root@node1 ~]# service iscsi start
iscsid is stopped
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.4:3260
172.16.1.4:3260,1 iqn.2012-02.com.uplooking:node4.target1
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.5:3260
172.16.1.5:3260,1 iqn.2012-02.com.uplooking:node5.target1
[root@node1 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node4.target1 -l
Logging in to [iface: default, target: iqn.2012-02.com.uplooking:node4.target1, portal: 172.16.1.4,3260]
Login to [iface: default, target: iqn.2012-02.com.uplooking:node4.target1, portal: 172.16.1.4,3260]: successful
[root@node1 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node5.target1 -l
Logging in to [iface: default, target: iqn.2012-02.com.uplooking:node5.target1, portal: 172.16.1.5,3260]
Login to [iface: default, target: iqn.2012-02.com.uplooking:node5.target1, portal: 172.16.1.5,3260]: successful
[root@node1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 524 MB, 524288000 bytes
17 heads, 59 sectors/track, 1020 cylinders
Units = cylinders of 1003 * 512 = 513536 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 524 MB, 524288000 bytes
17 heads, 59 sectors/track, 1020 cylinders
Units = cylinders of 1003 * 512 = 513536 bytes

Disk /dev/sdd doesn't contain a valid partition table


4. Cluster nodes node1 and node2 create device aliases with udev
[root@node1 ~]# udevinfo -a -p /sys/block/sdc
[root@node1 ~]# udevinfo -a -p /sys/block/sdd
[root@node1 ~]# vim /etc/udev/rules.d/80-iscsi.rules
[root@node1 ~]# cat /etc/udev/rules.d/80-iscsi.rules
SUBSYSTEM=="block", SYSFS{size}=="1024000", SYSFS{model}=="storage4", SYSFS{vendor}=="node4", SYMLINK="iscsi/node4"
SUBSYSTEM=="block", SYSFS{size}=="1024000", SYSFS{model}=="storage5", SYSFS{vendor}=="node5", SYMLINK="iscsi/node5"
[root@node1 ~]# start_udev 
Starting udev:                                             [  OK  ]
[root@node1 ~]# ll /dev/iscsi/
total 0
lrwxrwxrwx 1 root root 6 Feb 29 00:49 node4 -> ../sdc
lrwxrwxrwx 1 root root 6 Feb 29 00:49 node5 -> ../sdd
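A note on the SYSFS{size} match in the rules above: udev reports the device size in 512-byte sectors, not bytes, so the 500 MB backing files exported by node4 and node5 show up as 1024000 sectors. A quick sanity check of the arithmetic:

```shell
# udev's size attribute counts 512-byte sectors.
# Backing file created with dd bs=1M count=500:
bytes=$((500 * 1024 * 1024))
echo $((bytes / 512))    # 1024000 -> matches SYSFS{size}=="1024000"
```

If the rule never fires, compare this number against `cat /sys/block/sdc/size` on the initiator.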


5. On cluster nodes node1 and node2, build an LVM volume on the shared storage, create a GFS2 filesystem, and mount it at /iscsi
[root@node1 ~]# pvcreate /dev/iscsi/node4 
[root@node1 ~]# pvcreate /dev/iscsi/node5
[root@node1 ~]# vgcreate vg-iscsi /dev/iscsi/node5 /dev/iscsi/node4
[root@node1 ~]# lvcreate -l 125 -n lv-iscsi vg-iscsi

[root@node1 ~]# yum install gfs2-utils kmod-gfs
[root@node1 ~]# modprobe gfs2
[root@node1 ~]# lsmod | grep gfs2
gfs2                  349833  1 lock_dlm

[root@node1 ~]# mkfs.gfs2 -t iscsi_cluster:table1 -p lock_dlm -j 2 /dev/vg-iscsi/lv-iscsi 
This will destroy any data on /dev/vg-iscsi/lv-iscsi.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/vg-iscsi/lv-iscsi
Blocksize:                 4096
Device Size                0.49 GB (128000 blocks)
Filesystem Size:           0.49 GB (127997 blocks)
Journals:                  2
Resource Groups:           2
Locking Protocol:          "lock_dlm"
Lock Table:                "iscsi_cluster:table1"
UUID:                      E010CF07-13CF-F783-0A9A-8DB10E6D3444

[root@node1 ~]# mkdir /iscsi
[root@node1 ~]# mount -t gfs2 /dev/vg-iscsi/lv-iscsi /iscsi
[root@node1 ~]# echo "iscsi test" > /iscsi/file1
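The sizes line up: lvcreate -l 125 allocates 125 physical extents, and assuming LVM's default 4 MiB extent size (check with vgdisplay) that is 500 MiB, which at the 4096-byte GFS2 block size is exactly the 128000 blocks mkfs.gfs2 reported:

```shell
# 125 extents * 4 MiB (default LVM extent size) = 500 MiB
extents=125
size_mib=$((extents * 4))
# 500 MiB divided into 4096-byte blocks = 128000, as shown by mkfs.gfs2
echo $((size_mib * 1024 * 1024 / 4096))
```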

[root@node2 ~]# pvscan 
  Couldn't find device with uuid 'fOykMs-ByjL-X0Zh-oKOW-D8Yc-ZenO-fQ6AHJ'.
  PV /dev/sdd         VG vg-iscsi     lvm2 [496.00 MB / 0    free]
  PV /dev/sdc         VG vg-iscsi     lvm2 [496.00 MB / 492.00 MB free]
  PV /dev/sda2        VG VolGroup00   lvm2 [19.88 GB / 0    free]
  Total: 5 [60.84 GB] / in use: 5 [60.84 GB] / in no VG: 0 [0   ]

[root@node2 ~]# vgchange -ay vg-iscsi
  1 logical volume(s) in volume group "vg-iscsi" now active
[root@node2 ~]# mount -t gfs2 /dev/vg-iscsi/lv-iscsi /iscsi/
[root@node2 ~]# cat /iscsi/file1 
iscsi test


6. Add storage node node6: node1 and node2 discover and log in to the node6 target, give it a udev alias, and grow lv-iscsi online by 1 GB
[root@node6 ~]# yum install scsi-target-utils
[root@node6 ~]# mkdir /iscsi
[root@node6 ~]# dd if=/dev/zero of=/iscsi/disk-node6 bs=1M count=5000
[root@node6 ~]# vim /etc/tgt/targets.conf 
# Set the driver. If not specified, defaults to "iscsi".

default-driver iscsi


# Continue if tgtadm exits with non-zero code (equivalent of
# --ignore-errors command line option)
#ignore-errors yes


# Sample target with one LUN only. Defaults to allow access for all initiators:

<target iqn.2012-02.com.uplooking:node6.target1>
    backing-store /iscsi/disk-node6
    write-cache off
    vendor_id node6
    product_id storage6
    initiator-address 172.16.1.1
    initiator-address 172.16.1.2
</target>

[root@node6 ~]# service tgtd start
Starting SCSI target daemon: Starting target framework daemon

[root@node6 ~]# tgt-admin --show


[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.6:3260
[root@node1 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node6.target1 -l
[root@node1 ~]# udevinfo -a -p /sys/block/sde
[root@node1 ~]# cat /etc/udev/rules.d/80-iscsi.rules
SUBSYSTEM=="block", SYSFS{size}=="1024000", SYSFS{model}=="storage4", SYSFS{vendor}=="node4", SYMLINK="iscsi/node4"
SUBSYSTEM=="block", SYSFS{size}=="1024000", SYSFS{model}=="storage5", SYSFS{vendor}=="node5", SYMLINK="iscsi/node5"
SUBSYSTEM=="block", SYSFS{size}=="2048000", SYSFS{model}=="storage6", SYSFS{vendor}=="node6", SYMLINK="iscsi/node6"
[root@node1 ~]# start_udev 
Starting udev:                                             [  OK  ]
[root@node1 ~]# ll /dev/iscsi/
total 0
lrwxrwxrwx 1 root root 6 Feb 29 00:49 node4 -> ../sdc
lrwxrwxrwx 1 root root 6 Feb 29 00:49 node5 -> ../sdd
lrwxrwxrwx 1 root root 6 Feb 29 01:43 node6 -> ../sde


[root@node1 ~]# pvcreate /dev/iscsi/node6 
  Physical volume "/dev/iscsi/node6" successfully created
[root@node1 ~]# vgextend vg-iscsi /dev/iscsi/node6
  /dev/cdrom: open failed: Read-only file system
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  Volume group "vg-iscsi" successfully extended
[root@node1 ~]# lvextend -l 1246 /dev/vg-iscsi/lv-iscsi 
  /dev/cdrom: open failed: Read-only file system
  Extending logical volume lv-iscsi to 1000.00 MB
  Logical volume lv-iscsi successfully resized

[root@node1 ~]# df -h /iscsi
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg--iscsi-lv--iscsi
                      500M  259M  242M  52% /iscsi

[root@node1 ~]# gfs2_grow -v /iscsi

[root@node1 ~]# df -h /iscsi/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg--iscsi-lv--iscsi
                      4.4G  259M  4.2G   6% /iscsi


7. Add cluster node node3
    Update the target configuration on storage nodes node4, node5 and node6, discover and log in from node3, and set up the udev storage aliases
[root@node4 ~]# vim /etc/tgt/targets.conf    

    initiator-address 172.16.1.1
    initiator-address 172.16.1.2
    initiator-address 172.16.1.3

[root@node4 ~]# tgt-admin --update ALL --force
[root@node4 ~]# tgt-admin --show

[root@node3 ~]# yum install iscsi-initiator-utils
[root@node3 ~]# service iscsi start
[root@node3 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.4:3260
[root@node3 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.5:3260
[root@node3 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.6:3260

[root@node3 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node4.target1 -l
[root@node3 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node5.target1 -l
[root@node3 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node6.target1 -l


[root@node3 ~]# scp node1:/etc/udev/rules.d/80-iscsi.rules /etc/udev/rules.d/
[root@node3 ~]# start_udev 
Starting udev:                                             [  OK  ]
[root@node3 ~]# ll /dev/iscsi/
total 0
lrwxrwxrwx 1 root root 6 Feb 29 02:26 node4 -> ../sdb
lrwxrwxrwx 1 root root 6 Feb 29 02:25 node5 -> ../sdc
lrwxrwxrwx 1 root root 6 Feb 29 02:25 node6 -> ../sdd

    Join node3 to the cluster and mount the storage
[root@node3 ~]# pvscan 

[root@node3 ~]# vgchange -ay vg-iscsi

[root@node3 ~]# yum install gfs2-utils kmod-gfs
[root@node3 ~]# mkdir /iscsi
[root@node3 ~]# mount -t gfs2 /dev/vg-iscsi/lv-iscsi /iscsi/
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
/sbin/mount.gfs2: gfs_controld not running
/sbin/mount.gfs2: error mounting lockproto lock_dlm
The mount fails because node3 is not yet a cluster member: gfs_controld is brought up by the cman service, so node3 has to join the cluster before it can mount a GFS2 filesystem.

===================================================================
[root@node1 ~]# vim /etc/cluster/cluster.conf 
[root@node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0" ?>
<cluster config_version="2" name="iscsi_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.uplooking.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.uplooking.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node3.uplooking.com" nodeid="3" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
[root@node1 ~]# scp /etc/cluster/cluster.conf node3:/etc/cluster/

[root@node3 ~]# yum install cman openais
[root@node3 ~]# ls /etc/cluster/
cluster.conf
[root@node3 ~]# service cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... failed
cman not started: Can't find local node name in cluster.conf /usr/sbin/cman_tool: aisexec daemon didn't start
                                                           [FAILED]
[root@node3 ~]# cat /etc/cluster/cluster.conf 
<?xml version="1.0"?>
<cluster config_version="2" name="iscsi_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.uplooking.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.uplooking.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
node3 still holds the two-node config: the edit on node1 added node3 but left config_version at 2, so ccsd kept the running cluster's version-2 copy, which has no entry for node3 ("Can't find local node name"). The config_version must be incremented and the update distributed with ccs_tool.


===================================================================================

[root@node1 ~]# vim /etc/cluster/cluster.conf

[root@node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="iscsi_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.uplooking.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.uplooking.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node3.uplooking.com" nodeid="3" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>

[root@node1 ~]# ccs_tool update /etc/cluster/cluster.conf 
Config file updated from version 2 to 3

Update complete.

[root@node3 ~]# service cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... failed
cman not started: two_node set but there are more than 2 nodes /usr/sbin/cman_tool: aisexec daemon didn't start
                                                           [FAILED]
two_node="1" is only valid for exactly two nodes; with node3 added it must be removed, and cman then falls back to normal majority quorum.
================================================================================================


[root@node1 ~]# vim /etc/cluster/cluster.conf
[root@node1 ~]# 
[root@node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="4" name="iscsi_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.uplooking.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2.uplooking.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node3.uplooking.com" nodeid="3" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" />
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
[root@node1 ~]# ccs_tool update /etc/cluster/cluster.conf
Config file updated from version 3 to 4

Update complete.

[root@node3 ~]# service cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
[root@node3 ~]# cman_tool status
Version: 6.2.0
Config Version: 4
Cluster Name: iscsi_cluster
Cluster Id: 26292
Cluster Member: Yes
Cluster Generation: 12
Membership state: Cluster-Member
Nodes: 3
Expected votes: 1
Total votes: 3
Quorum: 2  
Active subsystems: 7
Flags: Dirty 
Ports Bound: 0  
Node name: node3.uplooking.com
Node ID: 3
Multicast addresses: 239.192.102.27 
Node addresses: 172.16.1.3 
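Without two_node, cman computes quorum as a strict majority of the total votes; for three one-vote nodes that is 2, matching the "Quorum: 2" line above. The arithmetic:

```shell
# Quorum = floor(total_votes / 2) + 1 (strict majority)
total_votes=3
echo $((total_votes / 2 + 1))   # 2 -> the cluster survives one node failure
```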
==================================================================


[root@node3 ~]# mount -t gfs2 /dev/vg-iscsi/lv-iscsi /iscsi/
/sbin/mount.gfs2: error mounting /dev/mapper/vg--iscsi-lv--iscsi on /iscsi: Invalid argument

[root@node3 ~]# cat /var/log/messages 

Feb 29 02:45:54 node3 kernel: GFS2: fsid=: Trying to join cluster "lock_dlm", "iscsi_cluster:table1"
Feb 29 02:45:54 node3 kernel: dlm: Using TCP for communications
Feb 29 02:45:54 node3 kernel: dlm: got connection from 1
Feb 29 02:45:54 node3 kernel: dlm: got connection from 2
Feb 29 02:45:54 node3 kernel: GFS2: fsid=iscsi_cluster:table1.2: Joined cluster. Now mounting FS...
Feb 29 02:45:55 node3 kernel: GFS2: fsid=iscsi_cluster:table1.2: can't mount journal #2
Feb 29 02:45:55 node3 kernel: GFS2: fsid=iscsi_cluster:table1.2: there are only 2 journals (0 - 1)
Every node that mounts a GFS2 filesystem needs its own journal; this filesystem was created with -j 2, so the journals run out when a third node tries to mount, and one must be added with gfs2_jadd.

[root@node1 ~]# gfs2_tool journals /iscsi
journal1 - 128MB
journal0 - 128MB
2 journal(s) found.

[root@node1 ~]# gfs2_jadd -j 1 /iscsi
Filesystem:            /iscsi
Old Journals           2
New Journals           3

[root@node3 ~]# mount -t gfs2 /dev/vg-iscsi/lv-iscsi /iscsi/
[root@node3 ~]# cat /iscsi/file1 
iscsi test
