Cloning a Schema With One Line
In this brief article, let's see how to clone a schema with one line.
In the world of DevOps, continuous integration, and repeatable test cases, the demand for being able to quickly build a suite of database objects, use them for a series of tests, and then throw them away has become far more common.
This is one of the many great use cases for pluggable databases, with all of the powerful cloning facilities available. In particular, now that you can take advantage of pluggable databases without incurring additional license fees, there are some great opportunities there…but that is a topic for another post.
What if the “unit of work” is not an entire pluggable database? What if we want to clone just a schema within a pluggable database? Gone are the days when a DBA might be asked to clone a schema once per month, or even once per week. Cloning a schema is now something developers want to do multiple times per day, as part of an automated process, without having to involve anyone at all! Welcome to DevOps!
Unfortunately, we do not yet have a command in the Oracle database that lets you run:
create user SCOTT2 from SCOTT;
...so I thought I’d throw something together, which hopefully is the next best thing. We can use the PL/SQL API to the Data Pump feature to achieve this, taking advantage of a few concepts, namely:
- Network-based import
- The implicit database link associated with a global name
- Dynamically alterable external table definitions
With these, we can build a PL/SQL procedure that becomes our one-line schema-cloning resource for developers.
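Before diving into the procedure, it is worth seeing the second concept in isolation: the database’s own global name can be handed to Data Pump as the “remote” link, giving a loopback, network-mode import that never writes a dump file to disk. A quick sketch (the value shown is just a placeholder for whatever your database returns):

-- The database's global name doubles as a loopback link reference for
-- DBMS_DATAPUMP; no dump file is written because the "network" import
-- reads straight back from the same database.
select global_name from global_name;

-- GLOBAL_NAME
-- ------------------------------
-- MYDB.EXAMPLE.COM   (placeholder)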
--
-- if you want to allow the drop user option, then the
-- owning schema will need the following privilege
--
-- Needless to say, you might want to wrap this within a procedure
-- in its own right to ensure people don't drop the WRONG user
--
-- For example:
--
-- create or replace
-- procedure customised_drop_user(p_user varchar2) is
-- begin
--   if .... then
--     execute immediate 'drop user '||p_user||' cascade';
--   else
--     raise_application_error(-20000,'What the hell?!?!?');
--   end if;
-- end;
--
grant drop user to MY_USER;
drop table datapump_clone_log;
--
-- the initial file in the definition (dummy.log) must
-- exist, and the directory you are using (TEMP) must match
-- the declaration in the PL/SQL procedure which follows
--
create table datapump_clone_log (
  msg varchar2(4000)
)
organization external
( type oracle_loader
  default directory TEMP
  access parameters
  ( records delimited by newline
    fields terminated by ','
    missing field values are null
    ( msg )
  )
  location ('dummy.log')
) reject limit unlimited;
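The TEMP directory object and the dummy.log file are assumed to already exist before the external table is created. A minimal setup sketch, assuming /tmp is an appropriate path on the database server and MY_USER is the owning schema:

-- Hypothetical setup: the directory name must match both the external table
-- definition above and the l_default_dir constant in the procedure below.
create or replace directory TEMP as '/tmp';
grant read, write on directory TEMP to MY_USER;

-- The initial location file must exist on the server before the external
-- table is queried, e.g. from the database server shell:
--   $ touch /tmp/dummy.log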
--
-- p_old      = existing schema
-- p_new      = target schema
-- p_drop_new = whether we drop the target schema first
-- p_asynch   = whether we wait or simply launch the import and return
--
-- I'd recommend leaving p_asynch as false, because that way you'll get the
-- import log returned right back to your screen
--
create or replace
procedure clone_schema(
  p_old      varchar2,
  p_new      varchar2,
  p_drop_new boolean default true,
  p_asynch   boolean default false
) is
  l_handle      number;
  l_status      ku$_status;
  l_state       varchar2(30);
  l_link        varchar2(128);
  l_job_name    varchar2(128) := upper(p_old)||'_SCHEMA_IMP';
  l_log_file    varchar2(128) := lower(p_old)||'_import.log';
  l_default_dir varchar2(128) := 'TEMP';
  rc            sys_refcursor;
  l_msg         varchar2(4000);

  procedure info(m varchar2, p_dbms_out boolean default false) is
  begin
    dbms_application_info.set_client_info(to_char(sysdate,'hh24miss')||':'||m);
    if p_dbms_out then
      dbms_output.put_line(to_char(sysdate,'hh24miss')||':'||m);
    end if;
  end;
begin
  if p_drop_new then
    begin
      info('Dropping '||p_new, p_dbms_out=>true);
      --
      -- See notes about potentially wrapping this for safety
      --
      execute immediate 'drop user '||p_new||' cascade';
    exception
      when others then
        if sqlcode != -1918 then raise; end if;
    end;
  end if;
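  --
  -- The database's own global name serves as a loopback "remote" link,
  -- so Data Pump can run a network-mode import from this database back
  -- into itself without ever writing a dump file.
  --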
  select global_name into l_link from global_name;

  l_handle := dbms_datapump.open(
                operation   => 'IMPORT',
                job_mode    => 'SCHEMA',
                remote_link => l_link,
                job_name    => l_job_name);

  dbms_datapump.add_file(
    handle    => l_handle,
    filename  => l_log_file,
    directory => l_default_dir,
    filetype  => dbms_datapump.ku$_file_type_log_file,
    reusefile => 1);
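  --
  -- Restrict the import to just the source schema, then remap every
  -- object from the old schema name to the new one.
  --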
  dbms_datapump.metadata_filter(
    handle => l_handle,
    name   => 'SCHEMA_EXPR',
    value  => '= '''||p_old||'''');

  dbms_datapump.metadata_remap(
    handle    => l_handle,
    name      => 'REMAP_SCHEMA',
    old_value => p_old,
    value     => p_new);

  info('Starting job', p_dbms_out=>true);
  dbms_datapump.start_job(l_handle);
  if not p_asynch then
    loop
      begin
        dbms_lock.sleep(3);
        dbms_datapump.get_status(
          handle    => l_handle,
          mask      => dbms_datapump.ku$_status_job_status,
          job_state => l_state,
          status    => l_status);
        info('l_state='||l_state);
      exception
        when others then
          if sqlcode = -31626 then
            l_state := 'COMPLETED';
          else
            raise;
          end if;
      end;
      exit when (l_state = 'COMPLETED') or (l_state = 'STOPPED');
    end loop;
    info('Final state:'||l_state, p_dbms_out=>true);
  end if;
  dbms_datapump.detach(l_handle);

  if not p_asynch then
    open rc for 'select msg from datapump_clone_log external modify ( location ( '''||l_log_file||''' ) )';
    loop
      fetch rc into l_msg;
      exit when rc%notfound;
      dbms_output.put_line(l_msg);
    end loop;
    close rc;
  end if;
end;
/
sho err
You can also get the source from my repo here.
Now let's have a look at the routine in action. This is from my 18c database:
SQL> set serverout on
SQL> exec clone_schema('SCOTT','SCOTT2');
172055:Dropping SCOTT2
172057:Starting job
172232:Final state:COMPLETED
Starting "MCDONAC"."SCOTT_SCHEMA_IMP":
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 184.1 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "SCOTT2"."BIGT" 1146660 rows
. . imported "SCOTT2"."DEPT" 4 rows
. . imported "SCOTT2"."EMP" 14 rows
. . imported "SCOTT2"."SALGRADE" 5 rows
. . imported "SCOTT2"."BONUS" 0 rows
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
ORA-39082: Object type PROCEDURE:"SCOTT2"."BLAH2" created with compilation warnings
ORA-39082: Object type PROCEDURE:"SCOTT2"."BLAH" created with compilation warnings
Job "MCDONAC"."SCOTT_SCHEMA_IMP" completed with 2 error(s) at Wed Nov 6 17:21:29 2019 elapsed 0 00:00:33
PL/SQL procedure successfully completed.
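If you launch the clone asynchronously instead (p_asynch => true), the call returns immediately while the job keeps running in the background. Because the procedure stamps its progress into the session client info via dbms_application_info, one rough way to keep an eye on it from another session is something like this (the client_info filter is purely illustrative):

-- Watch an asynchronous clone from another session; the procedure writes
-- progress messages such as "Dropping ..." and "Starting job" to CLIENT_INFO.
select sid, client_info
from   v$session
where  client_info is not null;

-- The Data Pump job itself is also visible while it is running.
select owner_name, job_name, state
from   dba_datapump_jobs;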
Note: If you want to run this on a version of the database below 18c, you can simply break the dynamic external table alteration into a separate ALTER TABLE statement to change the location, and then query the external table as normal. The rest of the code should work without alteration.
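As a rough sketch of that change, assuming the same variable names used in the procedure above:

-- Pre-18c alternative to the inline EXTERNAL MODIFY clause: repoint the
-- external table at the new log file first, then query it as normal.
execute immediate
  'alter table datapump_clone_log location ('''||l_log_file||''')';
open rc for 'select msg from datapump_clone_log';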
Enjoy!