Resizing Redo Logs in an Oracle 11g RAC + Data Guard Environment


Contents

  • Resizing Redo Logs in an Oracle 11g RAC + Data Guard Environment
    • I. How the Problem Arose
        • 1. Check the log file and log group information
        • 2. Problems caused by frequent log switches
    • II. Confirm That DG Synchronization Is Normal
        • 1. Check the database roles
        • 2. Check the process information
        • 3. Check that the archived logs are consistent
        • 4. Check for archive gaps and lag
    • III. Rebuild the Redo Log Files on the Primary
        • 1. Check the redo log groups and their sizes
        • 2. Drop the standby logs on the primary
        • 3. Replace the redo logs on the primary
        • 4. Add standby logs on the primary
    • IV. Rebuild the Standby Logs on the Standby
        • Step 1: Check the redo log groups and their sizes
        • Step 2: Cancel log apply on the standby
        • Step 3: Switch standby file management to manual
        • Step 4: Drop the standby logs on the standby
        • Step 5: Create new standby logs on the standby
        • Step 6: Switch logs on the primary so the temporary standby logfiles go to UNASSIGNED, then drop them
        • Step 7: Re-enable real-time apply and automatic standby file management
        • Step 8: Restart the standby and check data synchronization

I. How the Problem Arose

The customer's production database is a two-node Oracle 11g RAC cluster with Data Guard configured. Recently the application began to stall, and a check revealed an abnormal log configuration: the system has only 6 redo log groups (3 groups each for thread 1 and thread 2, each log file 100 MB) and 8 standby log groups (4 groups each for thread 1 and thread 2).

1. Check the log file and log group information

(1) Log file information

SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                                             IS_
---------- ------- ------- -------------------------------------------------- ---
         1         ONLINE  +DATA/hisdb/onlinelog/redo01.log                   NO
         2         ONLINE  +DATA/hisdb/onlinelog/redo02.log                   NO
         4         ONLINE  +DATA/hisdb/onlinelog/redo04.log                   NO
         5         ONLINE  +DATA/hisdb/onlinelog/redo05.log                   NO
         6         ONLINE  +DATA/hisdb/onlinelog/redo06.log                   NO
         3         ONLINE  +DATA/hisdb/onlinelog/redo03.log                   NO
         7         STANDBY +DATA/hisdb/onlinelog/group_7.446.1121009477       NO
         8         STANDBY +DATA/hisdb/onlinelog/group_8.447.1121009483       NO
         9         STANDBY +DATA/hisdb/onlinelog/group_9.448.1121009489       NO
        10         STANDBY +DATA/hisdb/onlinelog/group_10.449.1121009493      NO
        11         STANDBY +DATA/hisdb/onlinelog/group_11.450.1121009499      NO
        12         STANDBY +DATA/hisdb/onlinelog/group_12.451.1121009507      NO
        13         STANDBY +DATA/hisdb/onlinelog/group_13.452.1121009507      NO
        14         STANDBY +DATA/hisdb/onlinelog/group_14.453.1121009507      NO

14 rows selected.

(2) Log group information

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------
         1          2     114460        100          1 YES ACTIVE              9057709636 03-JAN-25
         2          2     114461        100          1 YES ACTIVE              9057751045 03-JAN-25
         3          2     114462        100          1 NO  CURRENT             9057800557 03-JAN-25
         4          1     324121        100          1 YES ACTIVE              9057794541 03-JAN-25
         5          1     324122        100          1 NO  CURRENT             9057807196 03-JAN-25
         6          1     324120        100          1 YES ACTIVE              9057790272 03-JAN-25

6 rows selected.

(3) Standby log information

SQL> select group#,thread#,round(bytes/1024/1024,2) size_mb,status from v$standby_log;

    GROUP#    THREAD#    SIZE_MB STATUS
---------- ---------- ---------- ----------
         7          1        100 UNASSIGNED
         8          1        100 UNASSIGNED
         9          1        100 UNASSIGNED
        10          1        100 UNASSIGNED
        11          2        100 UNASSIGNED
        12          2        100 UNASSIGNED
        13          2        100 UNASSIGNED
        14          2        100 UNASSIGNED

8 rows selected.

2. Problems caused by frequent log switches

Frequent log switches increase the CPU and I/O load, because every switch has to update the control file and the data dictionary and produces a new archived log.

Once the log groups have been filled in rotation, the LGWR process has to overwrite an earlier log file. If that file has not yet been archived, the switch cannot complete and sessions start to wait; the database effectively stalls until the log file to be overwritten has finished archiving.
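
When this happens, sessions typically pile up on 'log file switch' wait events. A quick way to spot them is a minimal check against gv$session covering both RAC instances (the exact event names and any further filtering are left as an exercise):

-- sessions currently waiting on a log switch, grouped by instance and wait event
select inst_id, event, count(*) waiting_sessions
  from gv$session
 where event like 'log file switch%'
 group by inst_id, event;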

In a production environment it is important to choose a reasonable redo log size: it improves overall database performance, reduces the I/O burden, and keeps database recovery time within reason.

Ideally, an average of 2-4 log switches per hour is reasonable.

The following script shows the number of log switches per hour; 2-4 per hour, i.e. one switch every 15-30 minutes, is about right.

set linesize 120
set pagesize 100
column  day     format a15              heading 'Day'
column  d_0     format a3               heading '00'
column  d_1     format a3               heading '01'
column  d_2     format a3               heading '02'
column  d_3     format a3               heading '03'
column  d_4     format a3               heading '04'
column  d_5     format a3               heading '05'
column  d_6     format a3               heading '06'
column  d_7     format a3               heading '07'
column  d_8     format a3               heading '08'
column  d_9     format a3               heading '09'
column  d_10    format a3               heading '10'
column  d_11    format a3               heading '11'
column  d_12    format a3               heading '12'
column  d_13    format a3               heading '13'
column  d_14    format a3               heading '14'
column  d_15    format a3               heading '15'
column  d_16    format a3               heading '16'
column  d_17    format a3               heading '17'
column  d_18    format a3               heading '18'
column  d_19    format a3               heading '19'
column  d_20    format a3               heading '20'
column  d_21    format a3               heading '21'
column  d_22    format a3               heading '22'
column  d_23    format a3               heading '23'
select substr(to_char(FIRST_TIME,'YYYY/MM/DD,DY'),1,15) day,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'00',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'00',1,0))) d_0,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'01',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'01',1,0))) d_1,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'02',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'02',1,0))) d_2,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'03',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'03',1,0))) d_3,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'04',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'04',1,0))) d_4,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'05',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'05',1,0))) d_5,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'06',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'06',1,0))) d_6,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'07',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'07',1,0))) d_7,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'08',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'08',1,0))) d_8,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'09',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'09',1,0))) d_9,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'10',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'10',1,0))) d_10,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'11',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'11',1,0))) d_11,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'12',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'12',1,0))) d_12,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'13',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'13',1,0))) d_13,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'14',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'14',1,0))) d_14,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'15',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'15',1,0))) d_15,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'16',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'16',1,0))) d_16,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'17',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'17',1,0))) d_17,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'18',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'18',1,0))) d_18,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'19',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'19',1,0))) d_19,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'20',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'20',1,0))) d_20,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'21',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'21',1,0))) d_21,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'22',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'22',1,0))) d_22,
       decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'23',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'23',1,0))) d_23
  from gv$log_history
 where first_time > sysdate - 60
 group by substr(to_char(FIRST_TIME,'YYYY/MM/DD,DY'),1,15)
 order by substr(to_char(FIRST_TIME,'YYYY/MM/DD,DY'),1,15) desc;

-- Sample output:
Day	00  01	02  03	04  05	06  07	08  09	10  11	12  13	14  15	16  17	18  19	20  21	22  23
------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
2025/01/02,THU	-   -	-   -	-   -	1   -	-   -	-   -	-   -	5   -	-   -	-   -	-   -	1   -
2025/01/01,WED	-   -	-   -	1   -	-   -	-   -	-   -	-   -	-   -	-   2	-   1	1   1	-   -
2024/12/31,TUE	-   -	-   -	-   -	-   -	-   -	-   -	-   -	-   -	1   -	-   -	-   -	-   -
2024/12/30,MON	-   -	-   -	-   -	-   -	-   -	-   -	-   -	-   -	-   -	-   -	-   4	2   -
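
As a rough cross-check on the chosen size, the redo generation rate can also be estimated from the archived log history. A sizing sketch along these lines (the 60-day window mirrors the script above, and the target of roughly 4 switches per hour is an assumption):

-- average MB of redo generated per hour over the last 60 days, per thread
select thread#,
       round(sum(blocks * block_size) / 1024 / 1024
             / (24 * (max(first_time) - min(first_time))), 2) avg_mb_per_hour
  from v$archived_log
 where first_time > sysdate - 60
 group by thread#;
-- rule of thumb: redo log size >= avg_mb_per_hour / 4, i.e. about 4 switches per hour at the average rate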

After analyzing the log switch frequency, we decided to enlarge the log files to 1024 MB. The procedure is as follows:

II. Confirm That DG Synchronization Is Normal

1. Check the database roles

-- Primary
SQL> select db_unique_name, open_mode, switchover_status, database_role from v$database;

DB_UNIQUE_NAME                 OPEN_MODE            SWITCHOVER_STATUS    DATABASE_ROLE
------------------------------ -------------------- -------------------- ----------------
HISDB                          READ WRITE           SESSIONS ACTIVE      PRIMARY

-- Standby
SQL> select db_unique_name, open_mode, switchover_status, database_role from v$database;

DB_UNIQUE_NAME                 OPEN_MODE            SWITCHOVER_STATUS    DATABASE_ROLE
------------------------------ -------------------- -------------------- ----------------
DGHISDB                        READ ONLY WITH APPLY NOT ALLOWED          PHYSICAL STANDBY

2. Check the process information

(1) On the primary, check the LNS process, which ships the primary database's redo entries to the standby.

SQL> select process, status, sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CLOSING          324114
ARCH      CLOSING          317879
ARCH      CLOSING          324115
ARCH      CLOSING          324116
LNS       WRITING          324117

(2) On the standby, check the MRP0 process, which applies the received archived logs to the standby database to keep it synchronized with the primary. The MRP process is a key component of DG; it ensures that the standby's data stays consistent with the primary.

SQL> select process, status, sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CLOSING          324117
ARCH      CLOSING          114458
ARCH      CONNECTED             0
ARCH      CLOSING          324116
RFS       IDLE             324118
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE                  0
MRP0      APPLYING_LOG     324118
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE             114459
RFS       IDLE                  0

13 rows selected.

3. Check that the archived logs are consistent

-- Primary
SQL> select max(sequence#),thread# from v$archived_log where RESETLOGS_CHANGE# = (SELECT RESETLOGS_CHANGE# FROM V$DATABASE_INCARNATION WHERE STATUS = 'CURRENT') GROUP BY THREAD#;

MAX(SEQUENCE#)    THREAD#
-------------- ----------
        324117          1
        114458          2

-- Standby
SQL> select max(sequence#),thread# from v$archived_log where RESETLOGS_CHANGE# = (SELECT RESETLOGS_CHANGE# FROM V$DATABASE_INCARNATION WHERE STATUS = 'CURRENT') GROUP BY THREAD#;

MAX(SEQUENCE#)    THREAD#
-------------- ----------
        324117          1
        114458          2

4. Check for archive gaps and lag

An archive gap typically arises when the standby has been out of sync with the primary for a long time; by the time this is noticed, the required archived logs have already been deleted on the primary, so the standby can no longer catch up on its own, and a gap results.

-- Primary
SQL> select * from v$archive_gap;

no rows selected

SQL> select STATUS, GAP_STATUS from V$ARCHIVE_DEST_STATUS where DEST_ID = 2;

STATUS    GAP_STATUS
--------- ------------------------
VALID     NO GAP

-- Standby
SQL> select * from v$archive_gap;

no rows selected

SQL> select STATUS, GAP_STATUS from V$ARCHIVE_DEST_STATUS where DEST_ID = 2;

STATUS    GAP_STATUS
--------- ------------------------
VALID     RESOLVABLE GAP
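
Beyond the gap views, the actual transport and apply lag can be read directly on the standby. A simple check (the values reported by v$dataguard_stats depend on how redo transport is configured):

-- on the standby: current transport and apply lag
select name, value, time_computed
  from v$dataguard_stats
 where name in ('transport lag', 'apply lag');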

III. Rebuild the Redo Log Files on the Primary

1. Check the redo log groups and their sizes

(1) Check the log files

SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                                             IS_
---------- ------- ------- -------------------------------------------------- ---
         1         ONLINE  +DATA/hisdb/onlinelog/redo01.log                   NO
         2         ONLINE  +DATA/hisdb/onlinelog/redo02.log                   NO
         4         ONLINE  +DATA/hisdb/onlinelog/redo04.log                   NO
         5         ONLINE  +DATA/hisdb/onlinelog/redo05.log                   NO
         6         ONLINE  +DATA/hisdb/onlinelog/redo06.log                   NO
         3         ONLINE  +DATA/hisdb/onlinelog/redo03.log                   NO
         7         STANDBY +DATA/hisdb/onlinelog/group_7.446.1121009477       NO
         8         STANDBY +DATA/hisdb/onlinelog/group_8.447.1121009483       NO
         9         STANDBY +DATA/hisdb/onlinelog/group_9.448.1121009489       NO
        10         STANDBY +DATA/hisdb/onlinelog/group_10.449.1121009493      NO
        11         STANDBY +DATA/hisdb/onlinelog/group_11.450.1121009499      NO
        12         STANDBY +DATA/hisdb/onlinelog/group_12.451.1121009507      NO
        13         STANDBY +DATA/hisdb/onlinelog/group_13.452.1121009507      NO
        14         STANDBY +DATA/hisdb/onlinelog/group_14.453.1121009507      NO

14 rows selected.

(2) Check the redo log information

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------
         1          2     114460        100          1 YES ACTIVE              9057709636 03-JAN-25
         2          2     114461        100          1 YES ACTIVE              9057751045 03-JAN-25
         3          2     114462        100          1 NO  CURRENT             9057800557 03-JAN-25
         4          1     324121        100          1 YES ACTIVE              9057794541 03-JAN-25
         5          1     324122        100          1 NO  CURRENT             9057807196 03-JAN-25
         6          1     324120        100          1 YES ACTIVE              9057790272 03-JAN-25

6 rows selected.

(3) Check the standby log information

SQL> select group#,thread#,round(bytes/1024/1024,2) size_mb,status from v$standby_log;

    GROUP#    THREAD#    SIZE_MB STATUS
---------- ---------- ---------- ----------
         7          1        100 UNASSIGNED
         8          1        100 UNASSIGNED
         9          1        100 UNASSIGNED
        10          1        100 UNASSIGNED
        11          2        100 UNASSIGNED
        12          2        100 UNASSIGNED
        13          2        100 UNASSIGNED
        14          2        100 UNASSIGNED

8 rows selected.

2. Drop the standby logs on the primary

Drop the old standby logs (groups 7-14):

alter database drop logfile group 7;
alter database drop logfile group 8;
alter database drop logfile group 9;
alter database drop logfile group 10;
alter database drop logfile group 11;
alter database drop logfile group 12;
alter database drop logfile group 13;
alter database drop logfile group 14;
3. Replace the redo logs on the primary

Step 1: Add two temporary log groups per thread:

-- Node 1
alter database add logfile thread 1 group 14 '+DATA' size 1024M;
alter database add logfile thread 1 group 15 '+DATA' size 1024M;

-- Node 2
alter database add logfile thread 2 group 16 '+DATA' size 1024M;
alter database add logfile thread 2 group 17 '+DATA' size 1024M;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------
         1          2     114484        100          1 YES ACTIVE              9058578050 03-JAN-25
         2          2     114485        100          1 YES ACTIVE              9058584300 03-JAN-25
         3          2     114483        100          1 YES ACTIVE              9058554867 03-JAN-25
         4          1     324142        100          1 YES ACTIVE              9058579106 03-JAN-25
         5          1     324140        100          1 YES INACTIVE            9058515833 03-JAN-25
         6          1     324141        100          1 YES ACTIVE              9058538032 03-JAN-25
        14          1     324143       1024          1 NO  CURRENT             9058623959 03-JAN-25
        15          1          0       1024          1 YES UNUSED                       0
        16          2     114486       1024          1 NO  CURRENT             9058621526 03-JAN-25
        17          2          0       1024          1 YES UNUSED                       0

10 rows selected.

Step 2: Switch logs so that the old log groups (groups 1-6) become INACTIVE:

alter system switch logfile;
alter system checkpoint;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------
         1          2     114484        100          1 YES INACTIVE            9058578050 03-JAN-25
         2          2     114485        100          1 YES INACTIVE            9058584300 03-JAN-25
         3          2     114483        100          1 YES INACTIVE            9058554867 03-JAN-25
         4          1     324142        100          1 YES INACTIVE            9058579106 03-JAN-25
         5          1     324140        100          1 YES INACTIVE            9058515833 03-JAN-25
         6          1     324141        100          1 YES INACTIVE            9058538032 03-JAN-25
        14          1     324143       1024          1 YES INACTIVE            9058623959 03-JAN-25
        15          1     324144       1024          1 NO  CURRENT             9058634615 03-JAN-25
        16          2     114486       1024          1 NO  CURRENT             9058621526 03-JAN-25
        17          2          0       1024          1 YES UNUSED                       0

10 rows selected.
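
If any of the old groups stays ACTIVE or CURRENT, keep switching and checkpointing. In a RAC environment, one convenient option (an alternative to switching logs on each instance separately) is to archive the current log of every enabled thread in a single statement:

-- force a log switch and archive of the current redo log in all enabled threads
alter system archive log current;
-- then re-check until groups 1-6 all show INACTIVE
select group#, thread#, status from v$log order by group#;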

Step 3: Drop log groups 1-6:

alter database drop logfile group 1;
alter database drop logfile group 2;
alter database drop logfile group 3;
alter database drop logfile group 4;
alter database drop logfile group 5;
alter database drop logfile group 6;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------
        14          1     324143       1024          1 YES INACTIVE            9058623959 03-JAN-25
        15          1     324144       1024          1 NO  CURRENT             9058634615 03-JAN-25
        16          2     114486       1024          1 NO  CURRENT             9058621526 03-JAN-25
        17          2          0       1024          1 YES UNUSED                       0

Step 4: Add the new log files:

/*
  For reference, the original online redo logs were redo01.log - redo06.log under +DATA/hisdb/onlinelog/
  (see the v$logfile listing at the start of this section); the new files below reuse that naming pattern.
*/
alter database add logfile thread 2 group 1 '+DATA/hisdb/onlinelog/redo1.log' size 1024M;
alter database add logfile thread 2 group 2 '+DATA/hisdb/onlinelog/redo2.log' size 1024M;
alter database add logfile thread 2 group 3 '+DATA/hisdb/onlinelog/redo3.log' size 1024M;
alter database add logfile thread 1 group 4 '+DATA/hisdb/onlinelog/redo4.log' size 1024M;
alter database add logfile thread 1 group 5 '+DATA/hisdb/onlinelog/redo5.log' size 1024M;
alter database add logfile thread 1 group 6 '+DATA/hisdb/onlinelog/redo6.log' size 1024M;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------
         1          2          0       1024          1 YES UNUSED                       0
         2          2          0       1024          1 YES UNUSED                       0
         3          2          0       1024          1 YES UNUSED                       0
         4          1          0       1024          1 YES UNUSED                       0
         5          1          0       1024          1 YES UNUSED                       0
         6          1          0       1024          1 YES UNUSED                       0
        14          1     324143       1024          1 YES INACTIVE            9058623959 03-JAN-25
        15          1     324144       1024          1 NO  CURRENT             9058634615 03-JAN-25
        16          2     114486       1024          1 NO  CURRENT             9058621526 03-JAN-25
        17          2          0       1024          1 YES UNUSED                       0

10 rows selected.

Step 5: Switch logs, then drop the temporarily added log groups:

alter system switch logfile;
alter system checkpoint;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------
         1          2     114487       1024          1 NO  CURRENT             9058754188 03-JAN-25
         2          2          0       1024          1 YES UNUSED                       0
         3          2          0       1024          1 YES UNUSED                       0
         4          1     324145       1024          1 YES INACTIVE            9058753823 03-JAN-25
         5          1     324146       1024          1 NO  CURRENT             9058759093 03-JAN-25
         6          1          0       1024          1 YES UNUSED                       0
        14          1     324143       1024          1 YES INACTIVE            9058623959 03-JAN-25
        15          1     324144       1024          1 YES INACTIVE            9058634615 03-JAN-25
        16          2     114486       1024          1 YES INACTIVE            9058621526 03-JAN-25
        17          2          0       1024          1 YES UNUSED                       0

10 rows selected.

alter database drop logfile group 14;
alter database drop logfile group 15;
alter database drop logfile group 16;
alter database drop logfile group 17;

SQL> select group#,thread#,sequence#,round(bytes/1024/1024,2) size_mb,members,archived,status,first_change#,first_time from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------
         1          2     114487       1024          1 NO  CURRENT             9058754188 03-JAN-25
         2          2          0       1024          1 YES UNUSED                       0
         3          2          0       1024          1 YES UNUSED                       0
         4          1     324145       1024          1 YES INACTIVE            9058753823 03-JAN-25
         5          1     324146       1024          1 NO  CURRENT             9058759093 03-JAN-25
         6          1          0       1024          1 YES UNUSED                       0

6 rows selected.

4. Add standby logs on the primary
Alter database add standby logfile thread 1 group 7 '+DATA/hisdb/onlinelog/standby07.log' size 1024m;
Alter database add standby logfile thread 1 group 8 '+DATA/hisdb/onlinelog/standby08.log' size 1024m;
Alter database add standby logfile thread 1 group 9 '+DATA/hisdb/onlinelog/standby09.log' size 1024m;
Alter database add standby logfile thread 1 group 10 '+DATA/hisdb/onlinelog/standby10.log' size 1024m;
Alter database add standby logfile thread 2 group 11 '+DATA/hisdb/onlinelog/standby11.log' size 1024m;
Alter database add standby logfile thread 2 group 12 '+DATA/hisdb/onlinelog/standby12.log' size 1024m;
Alter database add standby logfile thread 2 group 13 '+DATA/hisdb/onlinelog/standby13.log' size 1024m;
Alter database add standby logfile thread 2 group 14 '+DATA/hisdb/onlinelog/standby14.log' size 1024m;
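
The recreated standby log groups on the primary can be verified with the same v$standby_log query used earlier; they should now all report 1024 MB:

-- confirm the new standby log groups and their size on the primary
select group#, thread#, round(bytes/1024/1024,2) size_mb, status
  from v$standby_log
 order by group#;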

IV. Rebuild the Standby Logs on the Standby

Step 1: Check the redo log groups and their sizes

Check the log file information:

SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                                                  IS_
---------- ------- ------- ------------------------------------------------------- ---
         1         ONLINE  /oradata/dghisdb/onlinelog/redo01.log                   NO
         2         ONLINE  /oradata/dghisdb/onlinelog/redo02.log                   NO
         4         ONLINE  /oradata/dghisdb/onlinelog/redo04.log                   NO
         5         ONLINE  /oradata/dghisdb/onlinelog/redo05.log                   NO
         6         ONLINE  /oradata/dghisdb/onlinelog/redo06.log                   NO
         3         ONLINE  /oradata/dghisdb/onlinelog/redo03.log                   NO
         7         STANDBY /oradata/dghisdb/onlinelog/group_7.446.1121009477       NO
         8         STANDBY /oradata/dghisdb/onlinelog/group_8.447.1121009483       NO
         9         STANDBY /oradata/dghisdb/onlinelog/group_9.448.1121009489       NO
        10         STANDBY /oradata/dghisdb/onlinelog/group_10.449.1121009493      NO
        11         STANDBY /oradata/dghisdb/onlinelog/group_11.450.1121009499      NO
        12         STANDBY /oradata/dghisdb/onlinelog/group_12.451.1121009507      NO
        13         STANDBY /oradata/dghisdb/onlinelog/group_13.452.1121009507      NO
        14         STANDBY /oradata/dghisdb/onlinelog/group_14.453.1121009507      NO

14 rows selected.

SQL> select group#,thread#,round(bytes/1024/1024,2) size_mb,status from v$standby_log;

    GROUP#    THREAD#    SIZE_MB STATUS
---------- ---------- ---------- ----------
         7          1        100 UNASSIGNED
         8          1        100 UNASSIGNED
         9          1        100 UNASSIGNED
        10          1        100 UNASSIGNED
        11          2        100 UNASSIGNED
        12          2        100 UNASSIGNED
        13          2        100 UNASSIGNED
        14          2        100 UNASSIGNED

8 rows selected.

Step 2: Cancel log apply on the standby
alter database recover managed standby database cancel;
Step 3: Switch standby file management to manual
SQL> alter system set standby_file_management='manual';

SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      manual

Step 4: Drop the standby logs on the standby

(1) Add two temporary standby log file groups

alter database add standby logfile group 15 '/oradata/dghisdb/onlinelog/stlog15.log' size 1024m reuse;
alter database add standby logfile group 16 '/oradata/dghisdb/onlinelog/stlog16.log' size 1024m reuse;

(2) Switch logs on the primary so that the ACTIVE standby log activity moves onto the temporary files and the status of all the old standby logs flips to UNASSIGNED

SQL> alter system switch logfile;
-- Check on the standby
SQL> select group#,thread#,round(bytes/1024/1024,2) size_mb,status from v$standby_log;

(3) Drop the old standby log files

alter database drop logfile group 7;
alter database drop logfile group 8;
alter database drop logfile group 9;
alter database drop logfile group 10;
alter database drop logfile group 11;
alter database drop logfile group 12;
alter database drop logfile group 13;
alter database drop logfile group 14;
Step 5: Create new standby logs on the standby

Re-add the standby logs at 1024 MB (groups 7-14):

alter database add standby logfile group 7 '/oradata/dghisdb/onlinelog/standby07.log' size 1024m reuse;
alter database add standby logfile group 8 '/oradata/dghisdb/onlinelog/standby08.log' size 1024m reuse;
alter database add standby logfile group 9 '/oradata/dghisdb/onlinelog/standby09.log' size 1024m reuse;
alter database add standby logfile group 10 '/oradata/dghisdb/onlinelog/standby10.log' size 1024m reuse;
alter database add standby logfile group 11 '/oradata/dghisdb/onlinelog/standby11.log' size 1024m reuse;
alter database add standby logfile group 12 '/oradata/dghisdb/onlinelog/standby12.log' size 1024m reuse;
alter database add standby logfile group 13 '/oradata/dghisdb/onlinelog/standby13.log' size 1024m reuse;
alter database add standby logfile group 14 '/oradata/dghisdb/onlinelog/standby14.log' size 1024m reuse;
Step 6: Switch logs on the primary so the temporary standby logfiles go back to UNASSIGNED, then drop them on the standby:
alter database drop logfile group 15;
alter database drop logfile group 16;  

Note: handling of the redo logs on the standby

Normally the standby runs in read-only mode and makes no changes to the database, so its online redo log files are not in use.

Because the DG standby is read-only, this procedure therefore leaves the standby's online redo logs untouched.

Step 7: Re-enable real-time apply and automatic standby file management

With the steps above complete, the redo logfiles and standby logfiles in the DG environment have been resized; all that remains is to resume data synchronization.

alter database recover managed standby database using current logfile disconnect;
alter system set standby_file_management='AUTO';

Step 8: Restart the standby and check data synchronization
SQL> select process, status, sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CLOSING          324152
ARCH      CONNECTED             0
ARCH      CONNECTED             0
ARCH      CLOSING          114489
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE             114490
RFS       IDLE                  0
RFS       IDLE             324153
RFS       IDLE                  0
MRP0      APPLYING_LOG     324153

11 rows selected.

SQL> select group#,thread#,sequence#,archived,status from v$standby_log;

    GROUP#    THREAD#  SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
         7          2     114490 YES ACTIVE
         8          2          0 NO  UNASSIGNED
         9          1     324153 YES ACTIVE
        10          1          0 NO  UNASSIGNED
        11          0          0 YES UNASSIGNED
        12          0          0 YES UNASSIGNED
        13          0          0 YES UNASSIGNED
        14          0          0 YES UNASSIGNED

8 rows selected.
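
As a final cross-check, the highest archived sequence already applied on the standby can be compared, per thread, with the sequences being received above. A simple check run on the standby:

-- highest archived sequence marked as applied on the standby, per thread
select thread#, max(sequence#) max_applied
  from v$archived_log
 where applied = 'YES'
 group by thread#;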
