Atikh's DBA blog
Oracle Database Schema Refresh

Atikh Shaikh · oracle, Schema refresh

Schema refresh is a regular activity in a DBA's life that needs to be carried out carefully, as most of the time production data is being copied into non-production environments.

Definition: Copying data from a schema in one database to a schema in another database, or from one schema to another within the same database. Before proceeding, we should note the following details:

Source DB, schema and server name –> DB: prod1, schema: prod1_data, server: prod_srvr
Target DB, schema and server name –> DB: test1, schema: test1_data, server: test_srvr

We will describe the schema refresh in different cases, depending on the versions of the source and target databases.

CASE 1: Source and target databases on the same version (10g or higher)

Step 1: Take an export backup of the source (and, if it exists, the target) schema using the Data Pump method

Create a physical directory on the server and a logical directory in the database; make sure there is sufficient space on the server for the backups.

mkdir -p /backup/db
CREATE OR REPLACE DIRECTORY datapump_dir AS '/backup/db';
expdp userid=prod11g/Prod#342 directory=datapump_dir dumpfile=expdp_prod1_data.dmp logfile=expdp_prod1_data.log schemas=prod1_data

Once the backup completes, check the logfile for any errors. Perform the same steps on the target database if the target schema exists.
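A quick way to check the log is to grep for ORA- or error strings. A minimal sketch, using a synthetic log under /tmp so it can be tried anywhere (the real path would be /backup/db/expdp_prod1_data.log):

```shell
# Create a synthetic Data Pump log for demonstration (real path: /backup/db).
mkdir -p /tmp/refresh_demo
LOG=/tmp/refresh_demo/expdp_prod1_data.log
cat > "$LOG" <<'EOF'
Starting "PROD11G"."SYS_EXPORT_SCHEMA_01"
. . exported "PROD1_DATA"."EMP"  12.5 KB  107 rows
Job "PROD11G"."SYS_EXPORT_SCHEMA_01" successfully completed
EOF

# Flag any ORA- or ERROR lines; a clean log prints "export log clean".
if grep -E 'ORA-|ERROR' "$LOG"; then
  echo "export log has errors"
else
  echo "export log clean"
fi
```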

mkdir -p /backup/db
CREATE OR REPLACE DIRECTORY datapump_dir AS '/backup/db';

expdp userid=test11g/Test#342 directory=datapump_dir dumpfile=expdp_test1_data.dmp logfile=expdp_test1_data.log schemas=test1_data

Identify the tablespaces associated with the schema in the source and target databases:

select distinct tablespace_name from dba_segments where owner='PROD1_DATA';

select distinct tablespace_name from dba_segments where owner='TEST1_DATA';

Step 2: Transfer the source dump file to the target server using the scp utility as below

scp expdp_prod1_data.dmp username@test_srvr:/backup/db
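Dump files can be large, so it is worth confirming the copy arrived intact before dropping anything on the target. A minimal sketch, with /tmp directories standing in for the two servers and cp standing in for scp (paths and the username are illustrative):

```shell
# /tmp stands in for /backup/db on the two servers; cp stands in for scp.
SRC=/tmp/src_backup
DST=/tmp/dst_backup
mkdir -p "$SRC" "$DST"
printf 'dummy dump contents' > "$SRC/expdp_prod1_data.dmp"

# Real command: scp "$SRC/expdp_prod1_data.dmp" username@test_srvr:/backup/db
cp "$SRC/expdp_prod1_data.dmp" "$DST/expdp_prod1_data.dmp"

# Compare checksums on both sides before proceeding with the import.
sum_src=$(cksum < "$SRC/expdp_prod1_data.dmp")
sum_dst=$(cksum < "$DST/expdp_prod1_data.dmp")
if [ "$sum_src" = "$sum_dst" ]; then
  echo "checksums match"
else
  echo "transfer corrupted"
fi
```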

Step 3: Drop the schema in the target database that needs to be refreshed, then import the source dump file into the target database

drop user test1_data cascade;
CREATE OR REPLACE DIRECTORY import_dir AS '/backup/db';

impdp userid=test11g/Test#342 directory=import_dir dumpfile=expdp_prod1_data.dmp logfile=impdp_test1_data.log remap_schema=PROD1_DATA:TEST1_DATA remap_tablespace=<source tablespace>:<target tablespace>

Note: If the source and target schema names are the same, the remap_schema parameter is not required.

Verify the object counts in the source and target databases:

select owner, object_type, count(object_name) from dba_objects where owner='TEST1_DATA' group by owner, object_type;

Compile any invalid objects using the below script, which generates the required ALTER ... COMPILE statements:

select 'alter '||object_type||' ' ||owner||'.'||object_name||' compile;' from dba_objects where status='INVALID' and owner in ('TEST1_DATA');
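Alternatively, Oracle ships the utlrp.sql script, which recompiles all invalid objects in the database in dependency order; run it from SQL*Plus as SYSDBA:

```sql
-- utlrp.sql is bundled with the database under $ORACLE_HOME/rdbms/admin;
-- ? expands to ORACLE_HOME in SQL*Plus.
@?/rdbms/admin/utlrp.sql
```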

CASE 2: Source DB version lower than target DB version (both 10g or higher)

Data Pump supports this type of refresh without any issues. Follow the same process as Case 1.

CASE 3: Databases with a version lower than 10g

In such cases we need to use the traditional export/import (exp/imp) method.

Step 1: Take an exp backup of the source schema (and the target schema, if required)

exp userid=prod9i/Prod#234 file=expdp_PROD1_DATA.dmp log=expdp_PROD1_DATA.log owner=PROD1_DATA compress=N consistent=Y statistics=none buffer=20000000

Step 2: Copy the dump file to the target server using scp, as mentioned above.

Step 3: Before dropping the user in the target database, take a backup of its CREATE USER statement; then drop the user and re-run the CREATE USER command, since imp does not create the user.
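One way to capture the user DDL before the drop is the DBMS_METADATA package (available from 9i onwards). A sketch, assuming the target user TEST1_DATA; the grant DDL is worth capturing as well:

```sql
-- Run in SQL*Plus as a privileged user before dropping TEST1_DATA.
SET LONG 100000 PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('USER', 'TEST1_DATA') FROM dual;
-- System privileges and role grants can be captured the same way:
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', 'TEST1_DATA') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', 'TEST1_DATA') FROM dual;
```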

Step 4: If the associated tablespaces differ between the source and target databases, generate an indexfile (i.e., a DDL script) using the below command

imp userid=test9i/Test#342 file=expdp_PROD1_DATA.dmp indexfile=file_ddl.sql

Once the indexfile is generated, replace all source tablespace names with the corresponding target tablespace names and run the script to pre-create the objects under the schema.
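The replacement can be scripted with sed. A sketch on a synthetic indexfile, with PROD_TS and TEST_TS as placeholder tablespace names (the real names come from the dba_segments queries earlier; a real indexfile also contains REM-commented lines, omitted here):

```shell
# Synthetic indexfile with the source tablespace name, for illustration.
mkdir -p /tmp/idx_demo
cat > /tmp/idx_demo/file_ddl.sql <<'EOF'
CREATE TABLE "TEST1_DATA"."EMP" (...) TABLESPACE "PROD_TS" ;
CREATE INDEX "TEST1_DATA"."EMP_IDX" ON "TEST1_DATA"."EMP" (...) TABLESPACE "PROD_TS" ;
EOF

# Rewrite every source tablespace reference to the target tablespace,
# writing a new script so the original stays untouched.
sed 's/"PROD_TS"/"TEST_TS"/g' /tmp/idx_demo/file_ddl.sql > /tmp/idx_demo/file_ddl_target.sql
```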

Step 5: Import the source dump into the target

imp userid="'/ as sysdba'" file=expdp_PROD1_DATA.dmp log=imp_expdp_TEST1_DATA.log fromuser=PROD1_DATA touser=TEST1_DATA ignore=y

Verify the objects in the source and target databases, and compile invalid objects if required.