Channel: sqlcl | ThatJeffSmith

SQLcl and the ORDS JDBC Driver


You know what SQLcl is.

You know what ORDS is.

How do those two things go together?

Well, in ORDS 17.3 (Early Adopter/BETA!), we offer a new feature that allows you to execute ad hoc SQL and SQL scripts via HTTPS POST calls to ORDS.

No access to the database via the Listener required.

Wanna play?

Maybe the easiest way is to connect to ORDS with SQLcl – using a JDBC driver we built that ‘talks REST.’

  1. Get ORDS 17.3
  2. Configure it to allow for the new _/SQL/ endpoint
  3. REST enable a SCHEMA
  4. Start ORDS

Kris talks about how to do all of this here.

Download the REST driver – same page as the ORDS link, just scroll down and click download next to ‘Oracle REST Data Services JDBC driver.’

That’s a JAR file.

Drop it into your SQLcl/lib directory – you’ll see a TON of other JAR files there.

Start SQLcl.

It’s pretty easy.

Let’s look at that connect string.

connect HR/oracle@jdbc:oracle:orest:@http://localhost:8888/ords/hr/

The first part, you’re used to.

Oracle DB User/Password

That’s normal for the Database, but NOT normal for authenticating in ORDS.

In ORDS you’d normally authenticate via your webserver user or via OAuth2. But for this feature, we’ve added database authentication. So that’s brand new for ORDS. Once authenticated, you’ll ONLY be able to hit the /SQL/ endpoint in THAT REST enabled schema.

So, when running stuff through this connection, it will ALWAYS be as HR. That’s how ORDS works, it proxy connects (a JDBC thing) from the ORDS_PUBLIC_USER to the user on the REST enabled SCHEMA when running the work behind the REST POST, PUT, GET, DELETE, HEAD…

jdbc:oracle:orest:
This is how you’ll specify the new REST driver you just downloaded.

@http://localhost:8888/ords/hr/
This is the webserver address where ORDS is running, followed by the /ORDS/schema/.

You can ALSO do this.

Now we’re authenticating by the webserver user with the SQL Developer role.

In this case, the ‘dev’ user is an ORDS/Jetty user that’s been granted the SQL Developer role.

THIS user can hit ANY REST enabled SQL endpoint. So she can run stuff as HR or as ORDS_DEMO – both schemas which have been REST enabled.

Run your stuff

We support a LOT of stuff on this new JDBC driver. You can see the list here.

REST is stateless. That means you’re in AUTOCOMMIT mode. You run an UPDATE – it gets committed at the end of the call. There’s no ROLLBACK. Each request you make, each statement you run, that’s a separate call to ORDS and transaction.
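
To make that concrete, here’s a tiny sketch (assuming HR’s EMPLOYEES table is handy):

UPDATE employees SET salary = salary * 1.1 WHERE employee_id = 100;
-- over the REST driver, that UPDATE is committed as soon as the call returns
ROLLBACK;
-- effectively a no-op: the previous call's transaction is already closed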

You can’t touch the OS. Don’t think you can connect to ORDS with SQLcl and use a HOST command to take over a server. That’s all locked down. No SPOOLING. But you can run an INFO/DESC, a PL/SQL block, and of course your go-to for all things data – SQL.

Yeah, I know my SQL is boring. But use your imagination.

Why are we doing this?

A few reasons.

Folks need/want access to databases that aren’t regularly available for database connections.

We wanted to build a webified version of SQL Developer that runs in your browser and does all the database work via REST (powered by ORDS) – so we kind of needed a REST endpoint that handles ad hoc SQL, PL/SQL, scripts, and even SQL*Plus/SQLcl commands.

You might want to make data available to your app that’s lying in another database WITHOUT using DB_LINKS. So you could do that now via a POST call assuming that other database is configured for ORDS.
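
For instance, a raw call from the command line might look something like this hypothetical curl sketch (assuming the _/sql endpoint path and the HR credentials from above):

curl -i -X POST --user HR:oracle \
  -H "Content-Type: application/sql" \
  -d "select count(*) from employees;" \
  http://localhost:8888/ords/hr/_/sql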

This is ALL early adopter BETA. Don’t run it in production. Let us know what you think on the forums. And check out ORACLE-BASE’s article on the new ORDS feature.

We also have this project on GitHub. It allows you to run stuff over the new endpoint in ORDS via a webpage.

Download the files from GitHub, and dump them into your ORDS standalone HTDOCS folder.


Video! Using the Keyboard Shortcuts in Oracle SQLcl


Developers hate using a mouse. Sticking with the keyboard reminds everyone we bled all over our keyboards 20 years ago in VT100 terminals.

So we built a modern command line interface for Oracle Database and released it last year, in 2016. And knowing you already know and love the keyboard, we made sure you could do what you needed to do while building and running your scripts and statements.

So here’s a video showing you all the keyboard ‘tricks.’ They’re not really tricks though, these are DOCUMENTED!

The cheatsheet has entries for SQLDev & SQLcl…

You can find the keyboard shortcut ‘cheat sheet’ here.

And your 4 minutes and change video…

Scripts – on Spooling and Output to Screen in Oracle SQL Developer


When executing scripts in SQL Developer, the amount of output we display on the screen is LIMITED.

By design, we only show you 5,000 records from any query, and we limit the amount of output in total for a single script execution to 10,010 rows.

This is controlled here:

These are the defaults.

I know what you’re thinking…

Why?

A few reasons, but practicality is what it boils down to. Displaying 3M rows of script output in a GUI is not going to be fun for you or for the application. That’s a lot of machine resources to consume, and it’s more than likely to exhaust the memory allocated to the JVM. Plus, are you really going to sit there and scroll through that many rows of output?

Still, we do offer the preferences, and you’re more than welcome to increase them.

What Happens When we Hit the Limit?

Let’s say we want to build a CSV for Excel. We have more than 5,000 records. Let’s just do it anyway.

SET sqlformat csv
spool c:\objects_data.csv
SELECT * FROM all_objects;
spool off

Let’s observe the output.

So the query ran, we continued to fetch the results, we just didn’t print them to the screen.

And the ‘Excel’ file?

73,000+ records. No problems.

A Few Tips for Faster Scripts

Write your script to a .SQL file.

Run it this way.

Don’t open the file, and collapse the script output panel.

This will allow the script to run faster, or at least allow SQLDev to handle the script execution as fast as possible, instead of trying to advance the script in the editor and page through the results in the open display panel.

If you want it to run even faster, use SQLcl, and use a shell/batch script to run it…and add this line to your script

SET TERMOUT OFF

SET TERMOUT OFF isn’t observed in interactive mode – it’s ONLY honored when embedded as part of a script being @ executed.
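
So a minimal wrapper might look something like this – a sketch with a made-up file name and connect string:

REM my_export.sql (hypothetical script name)
SET TERMOUT OFF
SET SQLFORMAT csv
SPOOL c:\objects_data.csv
SELECT * FROM all_objects;
SPOOL OFF
EXIT

Then your shell/batch script just calls SQLcl with the file, e.g. sql hr/oracle@localhost:1521/orcl @my_export.sql – adjust the connect string for your environment.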

Building An Object Search for SQLcl


The search feature in SQL Developer is whiz-bang.

You’re using it, right?

Look for stuff in your database – click the binoculars/search button on the main toolbar.

But what about at the command line?

I’m guessing many of you just pluck away at ALL_OBJECTS or DBA_OBJECTS. Some of you may have written some custom scripts. But, what if you burned that into SQLcl?

You can of course do this with the ALIAS command.

You can say, ALIAS XZY=query;

And then access the query by just executing XZY.

AND, you can use positional binds!

So let’s take a look. I’m going to use this query.

SELECT owner,
       object_name,
       object_type
  FROM all_objects
 WHERE object_name LIKE :SEARCH
   AND owner NOT IN (
    'SYS',
    'MDSYS',
    'DBSNMP',
    'SYSTEM',
    'DVSYS',
    'APEX_050100',
    'PUBLIC',
    'ORDS_METADATA',
    'APEX_LISTENER'
)
   AND object_type IN (
    SELECT regexp_substr(:bind_ename_comma_sep_list,'[^,]+',1,level)
      FROM dual CONNECT BY
        regexp_substr(:bind_ename_comma_sep_list,'[^,]+',1,level) IS NOT NULL
)
 ORDER BY owner,
          object_name,
          object_type;

The only tricksy part is the code around the object type list in the second predicate. I want to feed in a list of values to be used in a WHERE IN clause. Thankfully someone else already figured that out – thanks Arunkumar!

So, with that passed in, I can search for just tables and indexes with the text EMP in the name.

No spaces on the object_type list, and make sure everything’s UPPERCASE.
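
Here’s roughly what registering and running it looks like – a trimmed down sketch (I dropped the owner NOT IN list and picked a made-up alias name, ‘findobj’):

ALIAS findobj=SELECT owner, object_name, object_type
  FROM all_objects
 WHERE object_name LIKE :search
   AND object_type IN (
    SELECT regexp_substr(:types,'[^,]+',1,level)
      FROM dual CONNECT BY regexp_substr(:types,'[^,]+',1,level) IS NOT NULL
)
 ORDER BY owner, object_name, object_type;

findobj %EMP% TABLE,INDEX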

If you’re not lazy, you’re not a good developer…probably. And by ‘lazy’, I mean smart. I had to spend about 15 minutes here to save myself a few seconds every time I’m going to look for objects now.

Space Report from SQL Developer..in SQLcl


I was using the Instance Viewer today and ran into the drill down for Storage.

Instance Viewer, whiz-bang!

Let’s take a closer look at that report.

Some of this isn’t going to work in a CLI…

I like this report enough, that I want to be able to run it in SQLcl. SQLcl is a command-line interface, so I’ll have to make a few changes.

The links won’t work, boo. Those are easy to fix up – just remove the SQLDev:Link code. The Used (proportion) column is a Gauge, so that probably won’t work either…

Side bar:

Adding Commands/Reports to SQLcl

I’m going to use the ALIAS feature to create a ‘SPACE’ command.

I’m going to need some SQL first, so I clear the Statements log panel, and run the report again to get the raw SQL.

The Statements panel is only available for Oracle THIN JDBC connections.

Now that I have the base SQL, the main thing is to handle the Gauge. So I go with a Nested Case, and I’ll use some Pipe (|) characters to denote the green, orange, and red areas.

In SQLcl, it’s easy to make this report a new command.

You can use binds too if you want…

No worries, the code is below if you want to skip a few steps.
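
If you just want the flavor of it first, here’s a much-simplified stand-in (a sketch – the real report below does the per-file math and the ASCII gauge):

ALIAS space=SELECT tablespace_name,
       round(SUM(bytes)/1024/1024) AS size_mb
  FROM dba_data_files
 GROUP BY tablespace_name
 ORDER BY tablespace_name;

space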

But now I can run it:

Tip: use a fixed-width font when you’re typing and whitespace matters!

Let’s blow that up a bit…

Not sophisticated, but it’ll work.

My Question/Challenge To You

Are you using SQLcl?

If so, have you used the Alias feature?

If your answers are anything but ‘yes’ and ‘yes’ – why not? And if so, share what you’ve built, we’d love to see it!

The Code

SELECT /*+ all_rows use_concat */ ddf.file_name AS "File Name",
       ddf.tablespace_name AS "Tablespace",
       ddf.online_status AS "Status",
       to_char(nvl(ddf.bytes / 1024 / 1024,0),'99999990.000') AS "Size (MB)",
       to_char(decode(nvl(u.bytes / 1024 / 1024,0),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / 1024 / 1024,0),nvl(u.bytes / 1024 / 1024,0) ),'99999999.999') AS
"Used (MB)",
       CASE
        WHEN ddf.online_status = 'OFFLINE' THEN 'OFFLINE'
        WHEN ddf.online_status = 'RECOVER' THEN 'RECOVER'
        ELSE
            CASE
                WHEN TRIM(to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes
* 100) ),'990') ) BETWEEN 0    AND 14 THEN '|*     | | |'
                WHEN TRIM(to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes
* 100) ),'990') ) BETWEEN 15    AND 24 THEN '|**    | | |'
                WHEN TRIM(to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes
* 100) ),'990') ) BETWEEN 25    AND 34 THEN '|***   | | |'
                WHEN TRIM(to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes
* 100) ),'990') ) BETWEEN 35    AND 44 THEN '|****  | | |'
                WHEN TRIM(to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes
* 100) ),'990') ) BETWEEN 45    AND 54 THEN '|***** | | |'
                WHEN TRIM(to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes
* 100) ),'990') ) BETWEEN 55    AND 64 THEN '|******| | |'
                WHEN TRIM(to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes
* 100) ),'990') ) BETWEEN 65    AND 74 THEN '|******* | |'
                WHEN TRIM(to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes
* 100) ),'990') ) BETWEEN 75    AND 84 THEN '|********| |'
                WHEN TRIM(to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes
* 100) ),'990') ) BETWEEN 85    AND 94 THEN '|********* |'
                WHEN TRIM(to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes
* 100) ),'990') ) BETWEEN 95    AND 100 THEN '|**********|'
                ELSE '?'
            END
    END
AS "Used (Proportion)",
       to_char(decode( (nvl(u.bytes,0) / ddf.bytes * 100),0,nvl( (ddf.bytes - nvl(s.bytes,0) ) / ddf.bytes * 100,0), (nvl(u.bytes,0) / ddf.bytes * 100) ),'990.99'
) AS "Used (%)",
       ddf.autoextensible AS "Auto Extend"
  FROM sys.dba_data_files ddf,
       ( SELECT file_id,
                SUM(bytes) bytes
           FROM sys.dba_free_space
          GROUP BY file_id
) s,
       ( SELECT file_id,
                SUM(bytes) bytes
           FROM sys.dba_undo_extents
          WHERE STATUS <> 'EXPIRED'
          GROUP BY file_id
) u
 WHERE (
    ddf.file_id = s.file_id (+)
       AND ddf.file_id = u.file_id (+)
)
UNION
SELECT v.name AS "File Name",
       dtf.tablespace_name AS "Tablespace",
       dtf.status AS "Status",
       to_char(nvl(dtf.bytes / 1024 / 1024,0),'99999990.000') AS "Size (MB)",
       to_char(nvl(t.bytes_used / 1024 / 1024,0),'99999990.000') AS "Used (MB)",
       CASE
        WHEN dtf.status = 'OFFLINE' THEN 'OFFLINE'
        ELSE
            CASE
                WHEN TRIM(to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990.99') ) BETWEEN 0   AND  14 THEN '|*     | | |'
                WHEN TRIM(to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990.99') ) BETWEEN 15  AND  24 THEN '|**    | | |'
                WHEN TRIM(to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990.99') ) BETWEEN 25  AND  34 THEN '|***   | | |'
                WHEN TRIM(to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990.99') ) BETWEEN 35  AND  44 THEN '|****  | | |'
                WHEN TRIM(to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990.99') ) BETWEEN 45  AND  54 THEN '|***** | | |'
                WHEN TRIM(to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990.99') ) BETWEEN 55  AND  64 THEN '|******| | |'
                WHEN TRIM(to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990.99') ) BETWEEN 65  AND  74 THEN '|******* | |'
                WHEN TRIM(to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990.99') ) BETWEEN 75  AND  84 THEN '|********| |'
                WHEN TRIM(to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990.99') ) BETWEEN 85  AND  94 THEN '|********* |'
                WHEN TRIM(to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990.99') ) BETWEEN 95  AND 100 THEN '|**********|'
                ELSE '?'
            END
    END
AS "Used (Proportion)",
       to_char(nvl(t.bytes_used / dtf.bytes * 100,0),'990') AS "Used (%)",
       dtf.autoextensible AS "Auto Extend"
  FROM sys.dba_temp_files dtf,
       sys.v_$tempfile v,
       v$temp_extent_pool t
 WHERE (
    dtf.file_name = v.name
        OR dtf.file_id = v.file#
)
   AND dtf.file_id = t.file_id (+)
 ORDER BY 1

Sell Me on Oracle SQLcl in 50 Seconds


Why would I use SQLcl over something else?

History recap: we introduced a new command-line interface for the Oracle Database in 2016. It’s everything you like about SQL*Plus, and everything you didn’t like about SQL*Plus – fixed.

Do you have a minute?

Watch this, then take 10 seconds to consider how YOU might use it.

Sold yet?

Everyday Things, Easier

In just 50 seconds I showed you:

  1. Object completion
  2. In-line editing
  3. Automatic SQL Formatting (pretty and CSV)

There’s so much more there though. Better takes on DESC, SQL History, new commands like CTAS, DDL. The ability to change your directory for spooling and @ files (cd).
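
A quick taste of a few of those from the prompt (a sketch – HR sample schema assumed):

SQL> cd /tmp
SQL> ddl employees
SQL> ctas employees employees_copy
SQL> history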

If you’d like a longer take, here’s a 50 minute full presentation on SQLcl I did earlier this year.

Or if you just want to click through some slides right-quick:

Yes, it’s an Official Product

It’s included with your Database license. It’s distributed with the database. And it’s covered by My Oracle Support. We also give you quarterly updates with new features and bug fixes. We also have our own OTN Space Community (message board).

Enjoy!

INFO vs DESC – Learning About your Oracle Schema Objects at the Command Line


Here’s an 8 minute video walk through of using INFO and DESC in Oracle SQLcl.

You can decide what works better for you – hey, maybe you’ll decide to use both!

  • INFO shows more info than DESC
  • INFO+ shows more info with more stats
  • You can INFO a package OR a package.procedure

The asterisk (*) denotes the Primary Key column(s) for the table.

Info, plus some stats.

Get a code block for an entire package or just a single procedure.
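
If you want to follow along at home, the commands boil down to something like this (HR sample schema assumed):

SQL> info hr.employees
SQL> info+ hr.employees
SQL> info dbms_output
SQL> info dbms_output.put_line
SQL> desc hr.employees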

Enjoy!

PS – I made this on my new rig

Mac Mini, 4k screen, but I filmed for 1080p resolution. I’m also using a new logitech mic/HD video camera. Please let me know if the fidelity is ‘good enough.’ I’ll probably never be as fancy as Connor, but I’m aiming to get better as I go 🙂

19.X SQLcl Teaser: LIQUIBASE


LIQUIBASE does ‘Source Control for your Database.’

It’s an Open Source project that allows you to capture changelogs for your database, including Oracle.

What we are doing:

  1. Extending the support for Oracle to include all schema object types
  2. Building an interface directly into SQLcl via a new LB (LIQUIBASE) command
  3. Generating the changelogs for you and managing rollbacks plus the ordering for the changelogs to avoid database object dependency errors.

A Quick Demo

I said this is a teaser, and it REALLY is a teaser – so what I’m showing you isn’t everything we’re doing. And we’ll be talking much more about this when it is available officially (19.x).

But let’s cover a very common scenario this might be useful for.

What I’m going to do is capture a schema as a ‘version 0’ or base copy of my application schema code.

Then I’m going to push that version to a new staging/test environment.

Then I’m going to make a change to my schema, and capture that changelog as a version.NEXT, and generate the SQL that would be used to update my staging/test environment.

Capture Base Version

I’m going to create a directory to hold my changelogs and master/controller file in. I then connect to my application schema in SQLcl, and run the LB command.
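
In terms of keystrokes, that’s something like this (the directory is made up – use whatever you created):

SQL> host mkdir -p /home/jeff/changelogs/v0
SQL> cd /home/jeff/changelogs/v0
SQL> lb genschema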

My schema has 692 objects in it – and I’m running this on my VBOX image on my laptop, so perf times will vary…

We’re getting XML based versions of your objects via DBMS_METADATA and writing those objects to XML files, and we’re building a controller.xml file to house the changelog for this version.

When I’m done, we can peek into our version 0 directory. Each object gets its own file, and then we have the ‘master’ controller file:

We’re handling all the hard bits for you – generating the changelogs, getting ALL the object properties defined, and ordering them correctly so they can be applied without failing due to dependency ordering.

Put it Down Somewhere Else

So I’m going to create a new user, grant that user some privs. If I wanted to – I could include that work in the changelog as a custom SQL script, but that’s not really what I want to show today…but you can customize this stuff very easily.

So I login as that new schema, and I do a ‘lb update.’

The ‘false’ says don’t worry about the SCHEMA.

The default behavior is to apply the changelogs with the schema prefix, but I captured in HR and I’m running in LB_DEMO.


As the update runs, we get a running log to the prompt, and I can watch along in the GUI if I so desire.

Let’s Start on Version.NEXT

So now I’m ready to do some more work in my schema, and push it as a new version.

First, I’m going to create a Version.NEXT folder to hold my changelogs for that version.

So I hop into my IDE or CLI, and I run my work…I’m removing a column called TWITTER_HANDLE and I’m adding one called TWITTER, but this could be anything related to the table – a new foreign key, a check constraint (IS_JSON!) – it’s going to be picked up by DBMS_METADATA, and we’re going to create an XML based changelog for this new version.

The single object changelog, via lb gen command.

I have a single XML file in my Version.NEXT folder now (EMPLOYEES_1-TABLE.XML)

Preview the Proposed Upgrade SQL

So I have my changelog for the original version. And I have my new version. What would happen if I did an update based on my new version changelog in my staging/test environment which is at the base version?

It sees that we need to remove one column, and add one column.

If I ran the lb update command, it’d actually apply the changelog live to my connected schema.

When can we have this?

Calendar year 2019 is the closest I can say, but your biggest hint is that I’m even willing to show you anything at this point.

Even further along the calendar, we want to do cool things in the GUI (SQL Developer) around better schema compares.

I’ll be talking more about this on the Conference Circuit this year, maybe KScope Seattle and GLOC in Cleveland.


Generating User DDL in SQLcl


The DDL command in SQLcl allows you to get the same information you’d see in an object editor’s SQL panel in SQL Developer.

Like so:

It’s all DBMS_METADATA calls under the covers…

But a user asked me recently about doing this for an actual Oracle USER, say, ‘HR’.

So let’s make this happen TODAY, versus waiting for a product enhancement update.

Make Your Own USER Command

One of the SQLcl commands is called ‘ALIAS.’ It allows you to take a query or script and map it to a new command in SQLcl. It also allows for positional :binds in your code.

So, I’m going to steal the SQL that SQL Developer uses when generating DDL for a schema, and I’m going to re-purpose it to a new command in SQLcl called ‘USER.’

Getting the SQL We Need

We just do ‘the work’ in SQL Developer, and then observe the ‘Statements’ panel.

Use the Filter feature to help you find the right Queries – SQL Developer sends a LOT to the database on your behalf.

Now that we have our code identified, copy and paste it into SQLcl (although I did format it first in SQL Developer so it’d be easier to read here).

The ALIAS Command

No worries, I share the code below!

There’s only one :bind in the query – that’s ‘:NAME’ – so when I invoke the command, I only need one parameter – the name of the USER I want the DDL for.

Here’s the SQL you need (tested on an 18c Oracle Database)

SELECT DBMS_METADATA.GET_DDL(
    'USER',
    :NAME
)
  FROM DUAL
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL(
    'ROLE_GRANT',
    GRANTEE
)
  FROM DBA_ROLE_PRIVS
 WHERE GRANTEE = :NAME
   AND ROWNUM = 1
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL(
    'SYSTEM_GRANT',
    GRANTEE
)
  FROM DBA_SYS_PRIVS          SP,
       SYSTEM_PRIVILEGE_MAP   SPM
 WHERE SP.GRANTEE = :NAME
   AND SP.PRIVILEGE  = SPM.NAME
   AND SPM.PROPERTY <> 1
   AND ROWNUM        = 1
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL(
    'OBJECT_GRANT',
    GRANTEE
)
  FROM DBA_TAB_PRIVS
 WHERE GRANTEE = :NAME
   AND ROWNUM = 1
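
Registering it is just a matter of wrapping the query in an ALIAS. A trimmed down sketch – in practice you’d paste the full UNION ALL query from above:

ALIAS user=SELECT dbms_metadata.get_ddl('USER', :NAME) FROM dual;

user HR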

So when you need a command that we haven’t built yet, build it yourself 🙂

But Wait, This Won’t Run – There’s no Semicolons!

Dawn pointed out in the comments that DBMS_METADATA leaves out statement delimiters by default. This is by design, as a program running SQL directly over a driver, say JDBC, doesn’t need to include semicolons.

But, we’re generating code which will ultimately be run ad hoc, most likely in SQLcl or some other query interface.

To get the semicolons, just put this in your login.sql or run this before running your new USER command:

BEGIN
dbms_metadata.set_transform_param(
        dbms_metadata.session_transform,
        'SQLTERMINATOR',TRUE);
END;
/

And now when we run the command again, we get…

SQL> SET long 1000
SQL> USER HR
 
DBMS_METADATA.GET_DDL('USER',:NAME)
--------------------------------------------------------------------------------
 
   CREATE USER "HR" IDENTIFIED BY VALUES 'S:74A7D802F0664676487836016A9F085517CC
153D4DE1384C459CCD17DD75;T:C7067A2E84D0376E29B317DF9CFE73087D379589842FB537C46D7
406C388F735CD6ED4FC83222D9F340DBDA9A3443192C690764BFE4BB46260B9B3B5980F839B7252F
969AA89FB889EBD1115D6A8955D'
      DEFAULT TABLESPACE "USERS"
      TEMPORARY TABLESPACE "TEMP";
 
 
   GRANT "DBA" TO "HR";
   GRANT "SELECT_CATALOG_ROLE" TO "HR";
 
DBMS_METADATA.GET_DDL('USER',:NAME)
--------------------------------------------------------------------------------
   GRANT "EXECUTE_CATALOG_ROLE" TO "HR";
   GRANT "ROLE_TD" TO "HR" WITH ADMIN OPTION;
   GRANT "ROLE_TD3" TO "HR" WITH ADMIN OPTION;

Notice also I’ve set LONG to a bigger number as this package returns the DDL as a CLOB, and SQLcl will need more characters available to display the entire script.

Cancelling in Oracle SQLcl


In this post, I’m going to show you how to:

  • ‘cancel’ yourself out of a buffer/command
  • cancel a query
  • ctrl+c yourself back to the shell/cmd prompt

Starting over on your query/script

Ctrl+C will kill you out of SQLcl completely if you’re in the editor. This is a bug in one of the components we’re using (JLINE), and we have a fix scheduled for SQLcl 19.2. But in the meantime, using the ESC key will provide you all the relief necessary.

You start typing, and at some point, you realize, there’s no fixing this, let’s just start over.

Hit ESC to exit the editor, and we’ll leave the buffer contents intact.

Again, we hope to have this working with Ctrl+C in version 19.2 of SQLcl.

Cancelling a query

You’ve successfully submitted your query or anonymous block to the database. Now we want to cancel it. Ctrl+C will send that request.

Now at that point, it’s up to the JDBC driver and the database to decide how and WHEN to handle that request.

Here’s an example.

Issue your Ctrl+C and wait.

I’m tired of waiting, just get me out of here already

Just hit Ctrl+C, again.

Yes, I know, I’m not the best typist out there.

Making this more transparent going forward

We also plan in version 19.2 to print status messages to the window. For example, when you send the query cancel, we’ll register that and let you know we’re working on it, so you don’t get frustrated and hit the button again.

I’m also told that in 19.2 with the new JLINE toys, we’ll be able to do some wicked-cool vi integration.

All of the Formatting Options for your Query Results


This would be for both SQLcl and SQL Developer, but a few are reserved for SQLcl today.

You have two options for formatting your query output:

  • SET SQLFORMAT
  • Adding a comment to your query /*json-formatted*/
That is, setting it for the session or just for a specific query.
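
For example, a quick sketch against the HR sample schema:

SQL> SET SQLFORMAT json-formatted
SQL> SELECT * FROM hr.regions;
SQL> SET SQLFORMAT ansiconsole
SQL> SELECT /*csv*/ * FROM hr.regions;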

Here are some slides demonstrating all the cool format options you have available:

Some things you may have forgotten about…

Just run ‘help set sqlformat’ for all the details:

Kris talks about the new pattern matching/highlighting feature included with ANSICONSOLE now in version 19.1.

And, did you know you can create your own formatting options? Just write a bit of code (js, python, perl, whatever Nashorn supports!)

My JS is horrible so I just steal it off the internet.

SQL Developer Web, Too!

When we share code between our tools, everyone wins!

Liquibase and SQLcl


I haven’t had time to talk about this as much as I would like to here, but I at least have some slides I can share.

If you’re going to check out the LB command in SQLcl, be aware that in 19.2, there’s a bug preventing it from working in your 11gR2 database, but we have that scheduled to be fixed in a 19.2.1 patch – due SOON.

Note also that the SQLcl that ships with SQL Developer does NOT have the liquibase functionality – you need to download the standalone offering.

Docs & Examples?

You bet!

Click the pic to get to the Docs/Examples

And if you’re IN SQLcl, we have help and examples burned in there as well.

All of the LB commands
The GENSCHEMA command does what it sounds like.
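
A sketch of getting at that built-in help, and the command you’ll probably run first:

SQL> help lb
SQL> lb genschema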

SQLcl and the Load (CSV) command


I was going to refer someone on StackOverflow to my post on the LOAD command in SQLcl, but then I realized I hadn’t written one yet. Oops. So here’s that post.

One of the new (that is, a command in SQLcl that is NOT in SQL*Plus) commands is ‘LOAD.’

You can find all the new commands highlighted if you run ‘help’

Guess what this does…

No need to guess what LOAD does, just consult the help.

SQL> help load
LOAD
-----
 
Loads a comma separated value (csv) file into a table.
The first row of the file must be a header row.  The columns in the header row must match the columns defined on the table.
 
The columns must be delimited by a comma and may optionally be enclosed in double quotes.
Lines can be terminated with standard line terminators for windows, unix or mac.
File must be encoded UTF8.
 
The load is processed with 50 rows per batch.
If AUTOCOMMIT is set in SQLCL, a commit is done every 10 batches.
The load is terminated if more than 50 errors are found.

A quick demo

Let’s SPOOL some CSV to a file, then use the LOAD command to put that data into a new table.

SQL> SET sqlformat csv
SQL> cd /Users/thatjeffsmith
SQL> spool objects.csv
SQL> SELECT * FROM all_objects fetch FIRST 100 ROWS ONLY;
"OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","GENERATED","SECONDARY","NAMESPACE","EDITION_NAME","SHARING","EDITIONABLE","ORACLE_MAINTAINED","APPLICATION","DEFAULT_COLLATION","DUPLICATED","SHARDED","CREATED_APPID","CREATED_VSNID","MODIFIED_APPID","MODIFIED_VSNID"
"SYS","I_FILE#_BLOCK#","",9,9,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJ3","",38,38,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_TS1","",45,45,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_CON1","",51,51,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","IND$","",19,2,"TABLE",07-FEB-18,21-NOV-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","CDEF$","",31,29,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","C_TS#","",6,6,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","I_CCOL2","",58,58,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_PROXY_DATA$","",24,24,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_CDEF4","",56,56,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_TAB1","",33,33,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","CLU$","",5,2,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_PROXY_ROLE_DATA$_1","",26,26,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJ1","",36,36,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","UNDO$","",15,15,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_UNDO2","",35,35,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_TS#","",7,7,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_FILE1","",43,43,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_COL2","",49,49,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJ#","",3,3,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","C_OBJ#","",2,2,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","I_CDEF3","",55,55,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","C_COBJ#","",29,29,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","CCOL$","",32,29,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_OBJ5","",40,40,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","PROXY_ROLE_DATA$","",25,25,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_CDEF1","",53,53,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","C_USER#","",10,10,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","C_FILE#_BLOCK#","",8,8,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","FET$","",12,6,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_CON2","",52,52,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJ4","",39,39,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","CON$","",28,28,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_CDEF2","",54,54,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","ICOL$","",20,2,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_COL3","",50,50,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_CCOL1","",57,57,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","COL$","",21,2,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_ICOL1","",42,42,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","UET$","",13,8,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","PROXY_DATA$","",23,23,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","USER$","",22,10,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_PROXY_ROLE_DATA$_2","",27,27,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJ2","",37,37,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","TAB$","",4,2,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_COBJ#","",30,30,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_USER#","",11,11,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","FILE$","",17,17,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","OBJ$","",18,18,"TABLE",07-FEB-18,15-OCT-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","TS$","",16,6,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_UNDO1","",34,34,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","BOOTSTRAP$","",59,59,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_COL1","",48,48,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_FILE2","",44,44,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_IND1","",41,41,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_USER2","",47,47,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_USER1","",46,46,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","SEG$","",14,8,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","OBJERROR$","",60,60,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:26","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","OBJAUTH$","",61,61,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:26","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_OBJAUTH1","",62,62,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:26","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJAUTH2","",63,63,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","C_OBJ#_INTCOL#","",64,64,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","I_OBJ#_INTCOL#","",65,65,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","HISTGRM$","",66,64,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_H_OBJ#_COL#","",67,67,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","HIST_HEAD$","",68,68,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_HH_OBJ#_COL#","",69,69,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_HH_OBJ#_INTCOL#","",70,70,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","FIXED_OBJ$","",71,71,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_FIXED_OBJ$_OBJ#","",72,72,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","TAB_STATS$","",73,73,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_TAB_STATS$_OBJ#","",74,74,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","IND_STATS$","",75,75,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_IND_STATS$_OBJ#","",76,76,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","OBJECT_USAGE","",77,77,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_STATS_OBJ#","",78,78,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","PARTOBJ$","",79,79,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_PARTOBJ$","",80,80,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","DEFERRED_STG$","",81,81,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_DEFERRED_STG1","",82,82,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","DEPENDENCY$","",83,83,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","ACCESS$","",84,84,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_DEPENDENCY1","",85,85,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_DEPENDENCY2","",86,86,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_ACCESS1","",87,87,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","USERAUTH$","",88,88,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_USERAUTH1","",89,89,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","UGROUP$","",90,90,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_UGROUP1","",91,91,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_UGROUP2","",92,92,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","TSQ$","",93,10,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","SYN$","",94,94,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","VIEW$","",95,95,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","TYPED_VIEW$","",96,96,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","SUPEROBJ$","",97,97,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_SUPEROBJ1","",98,98,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_SUPEROBJ2","",99,99,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","SEQ$","",100,100,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_VIEW1","",101,101,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
 
100 ROWS selected. 
 
SQL> spool off
SQL> CREATE TABLE demo_load AS SELECT * FROM all_objects WHERE 1=2;
 
TABLE DEMO_LOAD created.
 
SQL> LOAD demo_load objects.csv
--Insert failed in batch rows  101  through  103 
--ORA-01400: cannot insert NULL into ("HR"."DEMO_LOAD"."OWNER")
--Row 101 data follows:
INSERT INTO DEMO_LOAD(OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY,NAMESPACE,EDITION_NAME,SHARING,EDITIONABLE,ORACLE_MAINTAINED,APPLICATION,DEFAULT_COLLATION,DUPLICATED,SHARDED,CREATED_APPID,CREATED_VSNID,MODIFIED_APPID,MODIFIED_VSNID)
VALUES ('','','',NULL,NULL,'',to_date(''),to_date(''),'','','','','',NULL,'','','','','','','','',NULL,NULL,NULL,NULL);
--Row 102 data follows:
INSERT INTO DEMO_LOAD(OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY,NAMESPACE,EDITION_NAME,SHARING,EDITIONABLE,ORACLE_MAINTAINED,APPLICATION,DEFAULT_COLLATION,DUPLICATED,SHARDED,CREATED_APPID,CREATED_VSNID,MODIFIED_APPID,MODIFIED_VSNID)
VALUES ('100 rows selected. ','','',NULL,NULL,'',to_date(''),to_date(''),'','','','','',NULL,'','','','','','','','',NULL,NULL,NULL,NULL);
--Row 103 data follows:
INSERT INTO DEMO_LOAD(OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY,NAMESPACE,EDITION_NAME,SHARING,EDITIONABLE,ORACLE_MAINTAINED,APPLICATION,DEFAULT_COLLATION,DUPLICATED,SHARDED,CREATED_APPID,CREATED_VSNID,MODIFIED_APPID,MODIFIED_VSNID)
VALUES ('','','',NULL,NULL,'',to_date(''),to_date(''),'','','','','',NULL,'','','','','','','','',NULL,NULL,NULL,NULL);
--Number of rows processed: 103
--Number of rows in error: 3
1 - WARNING: LOAD processed WITH errors
SQL> commit;

Wait, what’s with the 3 failed rows at the end?

If I tail the csv file I created, there’s a few extra lines due to feedback…hence the 3 rows failed to run – which is good 🙂

Browsing the TABLE in SQL Developer it looks like it ran just as it should.

DATEs and TIMESTAMPs came in just A-OK as well 🙂

If I go in and remove those 3 lines, truncate the table, and run the LOAD again…

Cleaner 🙂

Is this the BEST way to load CSV?

Probably not – I would still advise folks it’s much faster to use sqlldr or External TABLEs, but it would be hard to argue this isn’t simpler, especially when you’re dealing with reasonable amounts of data.

‘Reasonable’ being a number of rows that can sensibly be INSERTed one at a time.

SQLcl on Oracle Cloud Infrastructure (OCI)


There’s nothing like the feeling of being IN your Oracle Database with just a few keystrokes..

And blammo!

I am SO ready to SELECT * FROM EMPLOYEES; right now…

Did you know that SQLcl is available for you in our OCI networks?

After generating myself a set of SSH keys, I was easily able to login to my new Oracle Linux 7.7 environment and get a Bash prompt.

To get started working with an Oracle Database, I need to get SQLcl. I don’t have to download it from Oracle.com with a Single Sign On, I can simply grab it via rpm or yum.

yum install sqlcl

There’s a dependency on Oracle Java 8, so if you don’t have that already, don’t freak out if you see a much bigger payload than SQLcl by itself.

If I’ve created an Oracle Database somewhere, I could connect right away, but I want to work with Oracle Autonomous.

I’ve taken advantage of the Always Free option for an Autonomous Database and created an instance of one of those.

I want to connect to my DB with SQLcl, so I need that Wallet.

Any connections to Autonomous Database Serverless require SSL, whether you’re on a private VCN or coming in from the public Internet, so I’m going to grab that zip.

I’m lazy, and I like GUIs, so I use an SSH client (CyberDuck) to move my zip file up to my Infrastructure box.

Now SQLcl can use my wallet to negotiate the SSL

I’m pretty sure there’s a developer SDK and a REST API available for OCI that lets you grab that w/o jumping through a browser, but that’s a brand new post by itself..so for now, quick and dirty GUI it is.

Some cool things to digest here…

Tricks to be keen about…

  • set cloudconfig wallet.zip
  • show tns

That set command loads up the SQLNet and TNSNames .ora files for your session. The show tns gives you a listing of the TNS entries available for a connection.
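
Put together, it’s something like this – the wallet path and TNS alias are made up, yours come from your own Autonomous instance:

SQL> set cloudconfig /home/opc/Wallet_mydb.zip
SQL> show tns
SQL> connect admin@mydb_high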

And now I can start playing with my data.

json-formatted is a new sqlformat available in SQLcl.

And of course I can go play with that same data in SQL Developer Web…no wallet required.

Autonomous ORDS/SQL Developer Web upgrade is rolling out.

I put my Always Free Autonomous TP instance in Ashburn, and it’s already running the ORDS 19.3 bits. 19.3 is rolling out to all the data centers in OCI, so if you still see 19.2.1 in yours, don’t freak out.

No net new features from 19.2.1 to 19.3, just bug fixes.

19.4 is on its way…soon.

New screens for Creating/Managing USERS (REST enabling them!) and loading CSV/Excel data to net new tables.

User Management – create, edit users, give roles, REST enable them, get their REST API base URLs…

And for data loading…drag and drop your stuff directly to a worksheet for a new table.

We’ll look at the incoming stream of data, and best-guess the column definitions for you. Failed rows are logged to an error table so you can fix them up later.

5 Minute Demo: Using Liquibase in SQLcl to version Oracle Database


I previously talked about our new Liquibase support in SQLcl, so you may want to start there if it’s brand new to you.

I shared some slides, and links to the Docs, but not much in the way of demo. I wanted to fix that today, so here’s a quick video showing off SQLcl and Liquibase.

Technically speaking, it’s 5 and a half minutes.

The Plot

I have two schemas:

  1. LB_DEV
  2. LB_UAT

I’m going to create two tables in LB_DEV, and call this first go, version 1 (v1).

Very boring, by design.

I’m going to generate a Liquibase changelog of this schema using SQLcl.

LB_DEV SQL> lb genschema

I’m going to update my UAT environment, basically ‘Install v1.’ Note, this is where the cool part comes in – in the previous step, ‘genschema’ has asked SQLcl to create the ‘controller.xml’ and all of the referenced object xml files FOR ME. And, SQLcl has ordered them such in the controller that there won’t be any dependency issues going forward.

Whiz-bang!

LB_UAT SQL> lb update controller.xml false

I’m going to do some work in Dev, add some fancy things like constraints, including a foreign key.

I’m going to see what that would look like in UAT BEFORE I upgrade it to ‘v2.’

LB_UAT SQL> lb updatesql controller.xml false

UPDATESQL says, hey, just show me the SQL you would theoretically run if you were to do an UPDATE with this controller.xml changeset.

Now, let’s actually do the update to ‘v2.’

LB_UAT SQL> lb update controller.xml false

And finally I’m going to show how you can do ‘normal Liquibase stuff’ using SQLcl, like use sqlFile to run some INSERTs to populate my base tables.
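
That last bit is plain Liquibase – a sqlFile changeSet pointing at a script of INSERTs, roughly like this (file name is made up):

<changeSet id="seed-base-tables" author="jeff">
    <sqlFile path="seed_base_tables.sql" relativeToChangelogFile="true"/>
</changeSet>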

Updated Slides

Newer, and hopefully better, slides.


Your SQL Developer & SQLcl Command Cheat Sheet


When I first got started with UNIX and vi/emacs, I lived and died by a nice, printed cheat sheet. And after a while, I didn’t need it anymore. I’m pretty sure I still have it over in my garage…squirrel (I’m easily distracted)!

Anyways, I got a request from @manivel_j for something similar, but for all of the client based commands in SQL Developer.

What do we mean, SQL Developer based commands? These are things the CLIENT (SQLDev) is responsible for, versus the SERVER (the Oracle Database). One of the most well known of these is DESC(ribe).

DESC is old, use INFO instead!

If you run that directly against a Database, it won’t work. But used in SQL*Plus, SQLcl, SQL Developer, or any other tool that has decided to implement a version of that command – you’re good to go.

There’s an easy way to see this list of commands – simply type and run ‘help’ in either SQLcl or SQL Developer.

Execute the statement using F5 (the script engine)

And if you say ‘help <command>’ you can get the help for that specific command.

Some help topics are more verbose than others 🙂

Cheat-sheet for non-SQL*Plus commands

SQL*Plus is the granddaddy of Oracle clients. It originated many of the commands we know, and mostly love, today. I’m going to talk about the ones that we have added. You might also want to see this post on SQLcl commands being run in SQL Developer.

So what’s new?

In SQLcl, if you run HELP, the commands we’ve added are underlined (OS X) or highlighted (Windows).

I really prefer the OS X bash terminal, but these all work just fine in Windows.

Looking for a cheat-sheet for keyboard shortcuts?

In alphabetical order…

ALIAS – you can build your own commands, with :binds, to make tedious-to-type SQL or PL/SQL easy to run.

APEX – Get a list of APEX applications in your schema, or import and export them, to and from files.

BRIDGE – Lets you easily move objects and data from one database to another, w/o DB_LINKs. This one is fairly old, going back as far as 2010.

CD – Change the current working directory. As in, tell the client where you want to SPOOL files to, or where you want to look for files to @execute. Crazy idea, right?

CTAS – Create TABLE As SELECT. We’ll help you build a usable CTAS command for a given table or view.

DDL – Generate DDL for an object. Uses DBMS_METADATA, but you don’t have to write that code. Also comes with settings, via SET DDL.

FIND – can you find my file on the $SQLPath?

FORMAT – invoke the SQL Developer formatter on the contents of your command buffer, a file, or an entire directory.

HISTORY – all the queries you’ve ever run in SQLcl.

INFORMATION, INFO, or INFO+…a better, much better version of DESCribe. Like, what’s my primary key, do I have any stats, comments, foreign keys, etc? DESC still works of course.

Liquibase (LB) – version control your Oracle Database Schema – either do an UPDATE, ROLLBACK, or we’ll generate changesets for you using GENSCHEMA or GENOBJECT.

LOAD – Take CSV and shove it into a table. Useful when you don’t have SQL*Loader handy.

OERR – Oracle Error Message Lookups, for ORA and PLS errors. Because Google takes too many clicks.

Message picked totally at random.

REPEAT – run a command, repeatedly, for a set number of times, at a certain delay…refresh the screen each run. Tail the alert log?

REST – ORDS stuff, get your ORDS modules, enabled schemas, privileges, etc.

SCRIPT – invoke the javascript engine to do stuff on the client, often in coordination with the things you’re doing in your Oracle connection. Like say...take all these files and load them as BLOBs to my table!

SODA – take your {json} file and create a new collection (table) in your Oracle Database. It’s much more than that, but you get the idea.

SSHTUNNEL – we’ll build a SSH Tunnel you can use to connect to databases not directly accessible via the Listener port.

TNSPING – can we find your database, and, how far away is it?


VAULT – using Hashicorp to securely store your Oracle database credentials.

WHICH – similar to find, goes through your $SQLPath and tells you exactly what file will be executed when you @file

SET and SHOW

These aren’t new, but there are MANY new things you can SET or SHOW for SQLcl and SQL Developer.

SHOW ALL and SHOW ALL+ will show you, a lot. SHOW TNS is fun. So is SHOW CONNECTION.

For SET, don’t miss out on SET DDL and SET SQLFORMAT.
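
A few to try at the prompt (a sketch – run ‘help set ddl’ for the exact option names):

SQL> show connection
SQL> show tns
SQL> set ddl storage off
SQL> set sqlformat csv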

Some of these work in SQLDev Web, too!

I love it when a plan comes together!

It’s common code across all of our tools, whether you’re in a desktop GUI, CLI, or your browser. There are exceptions of course, but INFO in one tool will show the same output in our other tools.

SQLcl & Liquibase: running Oracle Scripts in a changeSet


Liquibase has, for a very long time, offered the ability to define changeSets using SQL. In fact, according to their survey, it’s the MOST popular way of developing your schema deployment scripts.

Preferred method of authoring database scripts – Top 10 findings, Liquibase survey 2019, courtesy https://www.liquibase.org/

And it’s no surprise as to why. You can do pretty much anything you want for a change, vs being limited by the change types provided by Liquibase. Say, if you can’t add the column in the way you wanted to via the addColumn change type, it’s very easy for a database developer to just write their standard…

ALTER TABLE XYZ ADD ABC VARCHAR2(25)...

What if you could run SQLcl type stuff though?

I have a dumb example. It’s really here just to show you what we CAN do, not what you SHOULD do. For example, the SPOOL and LOAD commands are quite powerful for writing files and loading data to a table.

What if I wanted to create a changeSet that grabbed data from an existing table, wrote it to a file, and loaded it into another table (silly, because we could just do an INSERT … SELECT)?

Well, we can!

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog 
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog" 
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
        xmlns:n0="http://www.oracle.com/xml/ns/dbchangelog-ext" 
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog 
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.6.xsd">
        <changeSet id="abcd1234567890" author="Generated" failOnError="false"   runAlways="true" >
                <n0:runOracleScript objectName="MY_SCRIPT" objectType="SCRIPT" ownerName="FUNKY" sourceType="STRING" replaceIfExists="false">
                        <n0:source><![CDATA[
cd C:\liquibase\load_table
set sqlformat csv
set feedback off
spool locations.csv
select * from hr.locations;
spool off
create table locations as select * from hr.locations where 1=2;
load locations locations.csv
commit;
                        ]]></n0:source>
                </n0:runOracleScript>
        </changeSet>
</databaseChangeLog>

The script is pretty simple and self-explanatory. A couple of SET commands, a CD, SPOOL, and LOAD.

What’s not straightforward is how this is possible. You’ll note that this isn’t a sql or sqlFile changeType in Liquibase. No, it’s a custom one named ‘runOracleScript.’

n0 is the XML Namespace for our Oracle Liquibase extension.

runOracleScript says, ok, run this code through SQLcl – NOT through Liquibase’s SQL execution routine.

sourceType=’STRING’ means we’re going to provide the code right here inline, but we also support ‘FILE’ and ‘URL’.

Imagine ALL the things you can do in SQLcl and your scripts…that’s now possible in a changeSet!

Simpler, but powerful CLI features

Need something a bit more dynamic? If only we had substitution variables…but wait, we do!

For generating an object –

For seeing what an update would do –

The updateSQL command shows you what WOULD run if you submitted this changeSet via an update.
The & feature ‘just works’
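
As a hedged example of what that buys you, the body of a runOracleScript source could lean on a substitution variable – the table and file names here are made up:

-- &stage_table is resolved by SQLcl at run time, via DEFINE or a prompt
create table &stage_table as select * from hr.locations where 1=2;
load &stage_table locations.csv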

Lots and lots of changes coming soon to v20.2

More commands, more features, lots of bug fixes, and enhanced documentation are on the way! And as soon as it’s released you can expect more blogs and videos.

When? The same time the .2 releases normally come out – about 6-7 months into the calendar year.

Announcing SQL Developer, SQL Developer Data Modeler, & SQLcl versions 20.2


It’s been 177 days since our last release – version 19.4 at the end of December, 2019! Obviously, much has happened since then.

The astute amongst you would have noticed there WAS NOT a version 20.1 release. This was a conscious decision on our part to hit ‘pause’ vs bragging about 20.1 in the middle of a global pandemic.

However, there are bug fixes that we need to get into your hands, and our developers have never stopped working on new features, so we ARE releasing version 20.2, TODAY.

Download Links

What you’re getting, high level

SQL Developer

Probably the biggest thing coming in version 20.2 is a more flexible PL/SQL Debugger.

DBMS_DEBUG vs JDWP

DBMS_DEBUG just makes a secondary JDBC (or SQL*Net) connection to the database to run the debug session over, vs JDWP, which has the database itself reach out and open a connection back to your client. So there are no firewall or ACL rules to define. You can imagine this is also MUCH easier to navigate in a world where your database is running in the Cloud.
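
For context, here’s the sort of one-time setup JDWP debugging typically needs on 12c and later, so the database is allowed to reach back to your machine – a hedged sketch, where the host, port range, and user are placeholders. With the DBMS_DEBUG option, none of this is required.

BEGIN
  -- allow the HR schema to open a JDWP connection back to the client box
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host       => 'dev-laptop.example.com',
    lower_port => 4000,
    upper_port => 4999,
    ace        => xs$ace_type(privilege_list => xs$name_list('jdwp'),
                              principal_name => 'HR',
                              principal_type => xs_acl.ptype_db));
END;
/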

Another small, but useful feature – finding stuff in your plans.

Search. Yes, we should have added this years ago.

I don’t want to undersell the BORING stuff. There are also about 100 bug fixes.

Data Modeler

I don’t have much to say here, although there are also a ton of bug fixes and feature tweaks. I’ll be talking about this more later this Summer.

SQLcl

LOADS of great stuff here. Keyword being ‘load.’

If you like the ‘load’ command, you’ll now have the ability to configure exactly HOW the data is being loaded via two new SET commands.

SET LOAD & SET LOADFORMAT

So I can define how DATEs should be interpreted, or how large a batch size I’d like.

Even bigger, a brand new command!

OCI – a brand new command for working with Oracle Cloud Infrastructure right from SQLcl.
So, so, so many fun demos to build with this!

Liquibase

Our automated change management feature has huge changes as well for version 20.2. You get support for generating HTML docs, doing DIFFs, and also seeing what SQL will run on a rollback via the rollbacksql command.

Here’s a DIFF

How is SCHEMA A looking compared to SCHEMA B?

A note on SQLFORMAT…

ANSICONSOLE or ‘pretty printing’ of SQL Results is now the default.

We added a readme for SQLcl, and THIS change is noted there, in case you have production scripts relying on the old-school formatting option.

If you don’t like this, or MUST HAVE SQL*Plus style reporting, then you’ll want to SET SQLFORMAT to null in your login.sql.

What about ORDS?

ORDS and SQL Developer Web version 20.2 will be available shortly. Just a few more tires to kick.

Using SQLcl to write out BLOBs to files…in 20 lines of js


In fairness, it’s really only 8 lines of code, but @krisrice is a fan of comments.

Here’s the setup:

I have a query I want to run, where I can dump out a BLOB column in my table to separate files. And I need to automate this.

A customer, in not so many words, more or less

Using the SCRIPT command in SQLcl

SQLcl will let you run whatever SQL or PL/SQL you want. And it has quite a few commands that help you, like DDL and INFO.

But, it ALSO has a mechanism where you can call out to the Nashorn engine in Java to execute things written in JavaScript (or Jython, or about 20 other dialects).
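
If you’ve never tried the SCRIPT command, a minimal, hedged sketch looks like this:

script
// anything Nashorn can run goes here; ctx.write prints back to the SQLcl console
ctx.write('Hello from JavaScript inside SQLcl\n');
/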

Kris has quite a few examples up on his blog and GitHub, but I couldn’t find one that did this particular task. I asked him, he pasted it to me, and now I share it with YOU.

Grab this text, and save it as a .SQL file

script
// issue the SQL

var binds = {}
var ret = util.executeReturnList('select id,file_name,content from media',binds);

// loop the results
for (var i = 0; i < ret.length; i++) {
  // a little debug output never hurts
  ctx.write( ret[i].ID + "\t" + ret[i].FILE_NAME + "\n");

  // get the BLOB stream
  var blobStream = ret[i].CONTENT.getBinaryStream(1);

  // get the path/file handle to write to
  var path = java.nio.file.FileSystems.getDefault().getPath(ret[i].FILE_NAME);

  // dump the file stream to the file
  java.nio.file.Files.copy(blobStream,path);

}
/
!dir

Now, this SQL script won’t run in SQL*Plus. And if you look at it, it’s really JavaScript… the first line is the SCRIPT command in SQLcl, what follows is a chunk of code, the / at the end says ‘run this script,’ and the !dir (on Windows) gives me a directory listing.

The query is hard-coded on line #5. It assumes you have a file name and a BLOB that you can access.

The file name and BLOB column names can be changed in the query, but then you’ll ALSO need to change the CONTENT and FILE_NAME attributes of the ret[i] array referenced on lines 10, 13, & 16.

Once you have your query settled, you’re ready to run the script. Simply @script.sql in SQLcl.

It will print to screen a listing of the files as it’s writing out the same files to the current working directory. At the end, it issues the DIR command to show you the actual files.

Ta-da!

And if I browse to that directory, I can see my files came out, A-OK!

You can almost have SQLcl do ANYTHING

Load a directory of files as BLOBs, check.

Building an Excel worksheet, check.

Loading Data into Oracle with SQLcl


When it comes to loading data, especially very large amounts of data: if Data Pump is available, use that. If you can create an external table, do that. If you have access to SQL*Loader, use that.

But.

Not only is that a lot of IFs, there’s also the question of how developer-friendly those interfaces are, especially if you don’t live in an Oracle Database each and every day.

As a quick aside, I recommend you read Tim’s latest rant. By the way, he calls his posts rants, that’s not me ascribing a pejorative to the good Doctor. His post title pretty much says it all, The Problem With Oracle : If a developer/user can’t do it, it doesn’t exist.

The TL;DR take is pretty simple – if the interface isn’t readily available AND intuitive, a developer isn’t very likely to use it. Or bother to take the time to learn it.

If you’re still with me, what I wanted to talk about today is the LOAD command in SQLcl.

Yes, I know it’s not as fast as SQL*Loader. And I’ve extolled the virtues of SQL*Loader before! But maybe you don’t have an Oracle Client on your machine, and maybe you lack the patience to learn the CTL file syntax and yet another CLI.

So instead, if you’re already in SQLcl and you simply want to batch load some data to a table, I invite you to check out the latest and greatest we’ve introduced in version 20.2 of SQLcl.

New for 20.2

  • set load – # of rows per batch, # of errors to allow, date format, truncate before we go?
  • set loadformat – CSV? are there column names in line 0/1? what’s the delimiter? “strings” or ‘strings’?

So while the LOAD command isn’t new, the amount of flexibility you have now is very much new. You’re no longer ‘stuck with the defaults.’

Let’s load some funky data.

We’ll throw in a date column, some weird string enclosures, no column headers, and a blank line to start things off… here’s what one row looks like:

4|^|"US"|^|"Much like the regular bottling from 2012, this comes across as rather rough and tannic, with rustic, earthy, herbal characteristics. Nonetheless, if you think of it as a pleasantly unfussy country wine, it's a good companion to a hearty winter stew."|^|"Vintner's Reserve Wild Child Block"|^|87|^|65|^|"Oregon"|^|"Willamette Valley"|^|"Willamette Valley"|^|"Paul Gregutt"|^|"@paulgwine?ÿ"|^|"Sweet Cheeks 2012 Vintner's Reserve Wild Child Block Pinot Noir (Willamette Valley)"|^|"Sweet Cheeks"|^|436|^|08-JUL-2020 13.54.37

Yes, I have some wine review data – thanks Charlie for the heads-up!

So let’s set up our config for the run, starting with the profile of the data itself.

Our defaults aren’t going to cut it.

Column_names, delimiter, enclosure, and skip_rows all need to be tweaked.
Setting enclosure_left, without providing something for the right, defaults to both.
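
Reconstructed as a hedged sketch, that profile looks something like this – the option names come from the captions above, but check HELP SET LOADFORMAT for the exact quoting rules in your version:

set loadformat csv delimiter |^| enclosure_left " column_names off skip_rows 1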

Now let’s look at the overall load settings.

Let’s turn truncate on, set the date_format, and bump up the batch_rows.
We’ll play with a few different batch_row sizes and commit levels, so the TRUNCATE will come in handy.
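
Again as a hedged sketch – the date format matches the sample row above, and the value may need quoting in your version because of the embedded space:

set load truncate on batch_rows 10000 date_format DD-MON-YYYY HH24.MI.SS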

Let’s do this!

The command syntax is simply load <table_name> <file_name>… and that’s it.
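
With made-up table and file names, that looks like:

load wine_reviews winemag_reviews.csv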

I actually got this to load the first time I tried! I’ll be having a cookie/beer later as a reward 🙂

Now, what if we did something stupid, like set batch_rows to 5? There may be a case where that makes sense, but not here, not for this scenario.

I know someone will say that 4 seconds is slow compared to SQL*Loader, and you’d be right.

For quick and dirty data work, 50k rows in 4 seconds will do just fine. And nothing to install or configure, just unzip SQLcl and go. I’m pretty sure your average developer could use this interface and feature.

Trivia: This is the same code we use in…

…SQL Developer Desktop and ORDS. When you REST enable a table and use the batch loading POST endpoint, that’s running the same code SQLcl is using!
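
If you’re curious what that looks like from the ORDS side, here’s a hedged sketch of the AutoREST batch load call – the host, schema, table, and file names are placeholders:

curl -X POST "http://localhost:8888/ords/hr/wine_reviews/batchload" \
  -H "Content-Type: text/csv" \
  --data-binary @winemag_reviews.csv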
