
How to Use the Oracle Data Dictionary?

The Data Dictionary holds the metadata of an Oracle database: it contains information about all of the database objects. It is organized as sets of views stored in the SYSTEM tablespace, grouped by prefix:

 USER_ — objects in the user's own schema
 ALL_ — objects the user has access to
 DBA_ — the database administrators' view of all objects in the database
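As a quick illustration of how the prefixes change the scope, the same kind of query can be run at each level (the DBA_ query assumes you have the required privileges; note that ALL_ and DBA_ views carry an extra OWNER column that USER_ views do not need):

```sql
-- Tables in your own schema
SELECT table_name FROM USER_TABLES;

-- Tables you can access (OWNER column included)
SELECT owner, table_name FROM ALL_TABLES;

-- All tables in the database (requires DBA privileges)
SELECT owner, table_name FROM DBA_TABLES;
```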

Returns all objects and their statuses in your own schema:
SELECT object_name, object_type, status FROM USER_OBJECTS;

Returns all objects and their statuses to which you have access:
SELECT object_name, object_type, status FROM ALL_OBJECTS;

Returns all objects and their statuses in the database on a per-owner basis:
SELECT owner, object_name, object_type, status FROM SYS.DBA_OBJECTS;

There are also dynamic performance views in the Data Dictionary, which are prefixed with V$.

The following statement shows who is connected to the database:
SELECT sid, serial#, username, osuser, machine FROM V$SESSION WHERE username IS NOT NULL;
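The sid and serial# pair uniquely identifies a session, which is why the query above selects them: a DBA can use the pair, for example, to terminate a session. A sketch (the values '42,1234' are placeholders, not from the query above; the statement requires the ALTER SYSTEM privilege):

```sql
-- Terminate the session whose sid is 42 and serial# is 1234
-- ('42,1234' is a placeholder pair taken from a V$SESSION lookup)
ALTER SYSTEM KILL SESSION '42,1234';
```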

The static Data Dictionary views can be listed, along with their comments, through the DICT (DICTIONARY) view:
SELECT * FROM DICT ORDER BY table_name;
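DICT is also handy when you remember only part of a view's name. For example, to find every dictionary view whose name mentions tables:

```sql
-- List dictionary views whose names contain 'TABLES'
SELECT table_name, comments
  FROM DICT
 WHERE table_name LIKE '%TABLES%'
 ORDER BY table_name;
```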
