- select $σ_P(r)$
- project $Π_S(r)$
- rename $ρ_{x(A_1,A_2,…,A_n)}(r)$
- union $r ∪ s$
- difference $r - s$
- cartesian product $r × s$
- intersection $r ∩ s = r - (r - s)$
- natural join $r \Join s = Π_{R ∪ S}(σ_{r.A_1 = s.A_1 ∧ …}(r × s))$
- theta join $r \Join_θ s = σ_θ(r × s)$
- division (see the SQL sketch after this list)
$$temp1 ← Π_{R-S}(r)$$
$$temp2 ← Π_{R-S}((temp1 × s) - Π_{R-S,S}(r))$$
$$result = temp1 - temp2$$
Example: $r(Student, Task) ÷ s(Task)$

r (Student, Task):

Student | Task |
---|---|
Peter | Database1 |
Peter | Database2 |
Peter | Compiler1 |
Sara | Database1 |
Sara | Database2 |
Mary | Database1 |
Mary | Compiler1 |

s (Task): Database1, Database2

Result $r ÷ s$ (Student): Peter, Sara
- aggregation ${}_{group\_column}ζ_{aggre\_func(column)}(r)$
- delete $r ← r - E$
- insert $r ← r ∪ E$
- update $r ← Π_{F_1,F_2,…,F_n}(r)$
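A minimal SQL sketch of the relational division example above, assuming hypothetical tables takes(student, task) and required(task); it returns the students who take every required task (the classic double NOT EXISTS pattern):

-- students for whom no required task is missing
select distinct t.student
from takes t
where not exists (
  select 1 from required r
  where not exists (
    select 1 from takes t2
    where t2.student = t.student
      and t2.task = r.task
  )
);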
- select table owner:
SELECT owner,Table_name FROM all_tables WHERE owner='identified above' ORDER BY owner,Table_name ;
- CREATE, DROP & ALTER
CREATE DATABASE dbname;
USE dbname;
CREATE TABLE table_name (column_name TYPE constraints, ... );
DESC table_name;
DROP TABLE table_name;
ALTER TABLE table_name
ADD COLUMN columnName ...
ADD PRIMARY KEY (columnName)
RENAME TO tableNewName
CHANGE COLUMN columnOldName columnNewName TYPE ...
MODIFY COLUMN columnName TYPE...
DROP COLUMN columnName
INSERT INTO tableName [(columnName1, columnName2, ...)] VALUES ('value1', 'value2', ...);
UPDATE tableName SET columnName1 = 'value1', columnName2 = 'value2' WHERE expr;
DELETE FROM tableName WHERE expr;
- REGEXP pattern
- IN ('value1', 'value2', ...)
- columnName BETWEEN value1 AND value2
Equivalent to "columnName >= value1 AND columnName <= value2" (BETWEEN is inclusive)
- NOT
- SHOW
SHOW CREATE TABLE tableName; SHOW COLUMNS FROM tableName; SHOW INDEX FROM tableName; SHOW WARNINGS;
- FIRST, LAST, BEFORE, AFTER, SECOND…
- CASE
UPDATE tableName SET columnName = CASE WHEN column_1 = someValue1 THEN newValue ELSE columnName END;
- ORDER BY
ORDER BY columnName [ASC/DESC]
- SUM, AVG, MAX, MIN, COUNT
- often combined with GROUP BY
E.g.: SUM(columnName1) ... GROUP BY columnName2
- GROUP BY
Produces one row per group, so duplicates of the grouped columns are collapsed.
- SELECT columnName2, agg_func(columnName1) FROM tableName GROUP BY columnName2
(columns in the SELECT list that are not grouped must be wrapped in an aggregate function)
- Having
The HAVING clause was added to SQL because the WHERE keyword cannot be used with aggregate functions. E.g.: HAVING COUNT(columnName) > 5
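A minimal GROUP BY / HAVING sketch, assuming a hypothetical orders(customer_id, ...) table; it keeps only customers with more than five orders:

SELECT customer_id, COUNT(*) AS order_count
FROM orders
GROUP BY customer_id
HAVING COUNT(*) > 5;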
- EXISTS and NOT EXISTS are usually used in correlated subqueries.
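A correlated-subquery sketch with EXISTS, assuming hypothetical customers and orders tables; it returns customers that have at least one order:

SELECT c.customer_id, c.name
FROM customers c
WHERE EXISTS (
  SELECT 1 FROM orders o
  WHERE o.customer_id = c.customer_id
);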
- UNION
- Suppresses duplicates by default; UNION ALL keeps duplicates.
- WITH
Defines a temporary view (common table expression).
WITH temp_view_name(columnName, ...) AS (SELECT ...) SELECT ... FROM temp_view_name WHERE ...
- RECURSIVE
- CREATE RECURSIVE VIEW
- WITH RECURSIVE
WITH RECURSIVE empl(employee_name, manager_name) AS (
  SELECT employee_name, manager_name FROM manager
  UNION
  SELECT manager.employee_name, empl.manager_name
  FROM manager, empl
  WHERE manager.manager_name = empl.employee_name
)
SELECT * FROM empl
- GRANT & REVOKE
- GRANT privilege ON table TO user
- REVOKE privilege ON table FROM user
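A small sketch, assuming a hypothetical employees table and a user alice:

GRANT SELECT, INSERT ON employees TO alice;
REVOKE INSERT ON employees FROM alice;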
- deadlock / locked resource, ORA-00054: resource busy and acquire with NOWAIT specified
select * from v$locked_object;
select object_name,machine,s.sid,s.serial#
from v$locked_object l,dba_objects o ,v$session s
where l.object_id = o.object_id and l.session_id=s.sid;
alter system kill session '83,23941';  -- use the sid and serial# returned by the query above
CHAR, VARCHAR, BLOB, INT, DEC, DATE, DATETIME
select * from table_a a
inner join table_b b on a.id = b.id  -- join condition assumed; use the actual key columns
- Find the one thing the table needs to describe.
- List the necessary information about this thing (depends on how the table will be used).
- Break the information down into atomic pieces.
- Allows you to write fast queries.
- If a column contains values you don't yet know, use a NULL value.
- Makes data less accessible.
- To store a large piece of data, like a BLOB.
- For many-to-many relationships, use a junction table (see the sketch below).
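A minimal junction-table sketch, assuming hypothetical students and courses tables:

CREATE TABLE student_course (
  student_id INT NOT NULL,
  course_id  INT NOT NULL,
  PRIMARY KEY (student_id, course_id),  -- one row per (student, course) pair
  FOREIGN KEY (student_id) REFERENCES students (student_id),
  FOREIGN KEY (course_id) REFERENCES courses (course_id)
);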
- Trivial
$\bar B \subseteq \bar A$ (and $\bar A → \bar A ∪ \bar B$ also holds).
- Nontrivial
$\bar B \nsubseteq \bar A$
- Completely nontrivial
$\bar A ∩ \bar B = ∅$
Partial dependency: when a non-key column depends on some, but not all, of a composite PK.
Transitive dependency: when a non-key column depends on another non-key column.
- Splitting rule
If $\bar A → B_1, B_2$, then $\bar A → B_1$ and $\bar A → B_2$.
- Combining rule
If $\bar A → B_1$ and $\bar A → B_2$, then $\bar A → B_1, B_2$.
- Transitive rule
If $\bar A → \bar B$ and $\bar B → \bar C$, then $\bar A → \bar C$.
$\bar A \twoheadrightarrow \bar B$: whenever tuples t and u agree on $\bar A$, tuples v and w (with the $\bar B$ values and the remaining attributes swapped) must also exist:

tuple | $\bar A$ | $\bar B$ | rest |
---|---|---|---|
t | a | b1 | c1 |
u | a | b2 | c2 |
v | a | b1 | c2 |
w | a | b2 | c1 |
- Trivial MVDs
$\bar B \subseteq \bar A$, or $\bar A ∪ \bar B$ is all the attributes of the relation.
- Nontrivial
Otherwise.
An MVD is a tuple-generating dependency (it requires additional tuples to exist).
- Every FD is also an MVD
- Intersection rule
If $(\bar A\twoheadrightarrow \bar B) ∧ (\bar A\twoheadrightarrow \bar C)$, then $\bar A\twoheadrightarrow \bar B ∩ \bar C$.
- Transitive rule
If $(\bar A\twoheadrightarrow \bar B) ∧ (\bar B\twoheadrightarrow \bar C)$, then $\bar A\twoheadrightarrow \bar C - \bar B$.
- Rule 1: Columns must contain atomic values (no lists in a single cell). Violation example:

food_name | ingredients |
---|---|
bread | flour, milk, egg, yeast, oil |
salad | lettuce, tomato, cucumber |

- Rule 2: No repeating groups of columns. Violation example:

teacher | student1 | student2 |
---|---|---|
Ms.Mary | Joe | Ron |
- Rule 1: Be in 1NF
- Rule 2: Have no partial functional dependencies.
- Rule 1: Be in 2NF
- Rule 2: Have no transitive dependencies.
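A minimal decomposition sketch for removing a transitive dependency (emp_id → dept_id → dept_name), using hypothetical tables:

-- Before (violates 3NF): employee(emp_id, emp_name, dept_id, dept_name)
-- After: the transitively dependent attribute moves to its own relation
CREATE TABLE department (
  dept_id   INT PRIMARY KEY,
  dept_name VARCHAR(64)
);
CREATE TABLE employee (
  emp_id   INT PRIMARY KEY,
  emp_name VARCHAR(64),
  dept_id  INT,
  FOREIGN KEY (dept_id) REFERENCES department (dept_id)
);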
FDs lead to BCNF (MVDs lead to 4NF).
- Definition (BCNF)
For every nontrivial FD $\bar A → \bar B$ that holds in R, $\bar A$ is a superkey.
- Definition (4NF)
For every nontrivial MVD $\bar A \twoheadrightarrow \bar B$ that holds in R, $\bar A$ is a superkey.
A relation in 4NF is also in BCNF.
Complete: Every object is in at least one subclass.
Overlapping: One object is in two+ subclasses.
3 choices:
- Subclass relations contain superclass key + specialized attrs
- Subclass relations contain all attributes
- One relation containing all superclass + subclass attrs
Examples:
- S(K, A), S1(K, B), S2(K, C)
- S(K, A), S1(K, A, B), S2(K, A, C)
- S(K, A, B, C)
Constrain allowable database states (static).
Major keywords: PK, FK, UNIQUE, CHECK
- Examples:
CREATE TABLE tableName (
  columnName TYPE CHECK (columnName IN ('value1', 'value2'))
);
ALTER TABLE tableName ADD CONSTRAINT constraint_name CHECK (columnName > 1);
CHECK (SUBSTRING(columnName, 1, 1) = 'A')
- A FK can have different name than the parent key.
- FK values can be NULL.
- We can make sure a FK contains a meaningful value by using a constraint.
- The FK doesn't have to reference the primary key of the parent table, but the referenced column must be unique.
CREATE TABLE tableName
(
...
columnName TYPE NOT NULL,
[CONSTRAINT constraint_name]
FOREIGN KEY (columnName)
REFERENCES parent_tableName (parent_columnName)
)
You can name constraint_name whatever you like; the column list after FOREIGN KEY must be actual columns of this table.
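A concrete foreign-key sketch, assuming hypothetical customers and orders tables:

CREATE TABLE customers (
  customer_id INT PRIMARY KEY,
  name        VARCHAR(64)
);
CREATE TABLE orders (
  order_id    INT PRIMARY KEY,
  customer_id INT NOT NULL,
  CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
);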
- To enforce constraints(Dynamic)
- Move logic from apps into DBMS
- Event-Condition-Action Rules
When event occurs, check condition; if true, do action.
- Syntax
- events
- [For Each Row]
Determines whether the trigger is row-level or statement-level
- referencing-variables
- condition
Expressed in a WHEN (or WHERE) clause; the exact syntax depends on the SQL implementation (see the sketch below).
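A minimal row-level trigger sketch in Oracle-style PL/SQL (event, row-level clause, condition, action), using hypothetical emp and emp_audit tables:

CREATE OR REPLACE TRIGGER emp_salary_audit_tr
AFTER UPDATE OF salary ON emp          -- event
FOR EACH ROW                           -- row-level trigger
WHEN (NEW.salary <> OLD.salary)        -- condition
BEGIN
  -- action: record the change
  INSERT INTO emp_audit (emp_id, old_salary, new_salary, changed_at)
  VALUES (:OLD.emp_id, :OLD.salary, :NEW.salary, SYSDATE);
END;
/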
The difference between a full table scan and immediate location of the matching tuples.
- Balanced trees (B tree, B+ tree)
Used when the query filters with range operators (>, <, >=, <=).
- Hashtable
Used when the query filters with equality (=).
CREATE INDEX IndexName ON T(A1,A2...)
CREATE UNIQUE INDEX ...
DROP INDEX IndexName
- Extra space
- Index creation
- Index maintenance(Important)
When the database is updated, its indexes must also be updated.
Benefits depend on:
- Data distributions
- Query vs. update load
- Size of table(and possibly layout)
- Input (database statistics and workload)
- Output (recommended indexes)
- Concurrent database access
- Resilience to system failures
- A(Atomicity)
Each transaction is “all-or-nothing”, never left half done.
- C(Consistency)
Each transaction takes the database from one consistent state to another.
- I(Isolation)
Concurrent transactions behave as if they were executed one at a time.
- D(Durability)
Once a transaction commits, its changes survive system failures.
- dirty data: written by an uncommitted transaction
- nonrepeatable reads: an item read multiple times returns different values because another transaction changed it in between
T1: UPDATE Student SET GPA = (1.1)*GPA
T2.S1: SELECT AVG(GPA) FROM Student
T2.S2: SELECT MAX(GPA) FROM Student
- phantoms
T1: INSERT INTO Student [100 new tuples]
T2.S1: SELECT AVG(GPA) FROM Student
T2.S2: SELECT MAX(GPA) FROM Student
levels | dirty reads | nonrepeatable reads | phantoms |
---|---|---|---|
Read Uncommitted | Y | Y | Y |
Read Committed | N | Y | Y |
Repeatable Read | N | N | Y |
Serializable | N | N | N |
- Standard default: Serializable
- Some systems have default Repeatable Read
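A small sketch of choosing an isolation level per transaction; the first form is the SQL-standard/Oracle syntax, the commented line is the MySQL session-level form (assumed syntax for that engine):

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- MySQL: SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;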
SQL statements are the means by which programs and users access data in an Oracle database.
The sections that follow show each SQL statement and its related syntax. Refer to [Chapter 5, “Subclauses”](sqlqr05.htm#g1004143) for the syntax of the subclauses listed in the syntax for the statements.
See Also:
Oracle Database SQL Language Reference for detailed information about Oracle SQL
- [ALTER CLUSTER]
ALTER CLUSTER [ schema. ]cluster
{ physical_attributes_clause
| SIZE size_clause
| allocate_extent_clause
| deallocate_unused_clause
| { CACHE | NOCACHE }
} ...
[ parallel_clause ] ;
- [ALTER DATABASE]
ALTER DATABASE [ database ]
{ startup_clauses
| recovery_clauses
| database_file_clauses
| logfile_clauses
| controlfile_clauses
| standby_database_clauses
| default_settings_clauses
| instance_clauses
| security_clause
} ;
- [ALTER DATABASE LINK]
ALTER DATABASE LINK dblink
{ CONNECT TO user IDENTIFIED BY password [ dblink_authentication ]
| dblink_authentication
};
- [ALTER DIMENSION]
ALTER DIMENSION [ schema. ] dimension
{ ADD { level_clause
| hierarchy_clause
| attribute_clause
| extended_attribute_clause
}
} ...
|
{ DROP { LEVEL level [ RESTRICT | CASCADE ]
| HIERARCHY hierarchy
| ATTRIBUTE attribute [ LEVEL level [ COLUMN column ] ]...
}
} ...
|
COMPILE
;
- [ALTER DISKGROUP]
ALTER DISKGROUP
{ diskgroup_name
{ { { add_disk_clause | drop_disk_clause }
[, { add_disk_clause | drop_disk_clause } ]...
| resize_disk_clause
} [ rebalance_diskgroup_clause ]
| disk_online_clause
| disk_offline_clause
| rebalance_diskgroup_clause
| check_diskgroup_clause
| diskgroup_template_clauses
| diskgroup_directory_clauses
| diskgroup_alias_clauses
| diskgroup_volume_clauses
| diskgroup_attributes
| modify_diskgroup_file
| drop_diskgroup_file_clause
| usergroup_clauses
| user_clauses
| file_permissions_clause
| file_owner_clause
}
| { diskgroup_name [, diskgroup_name ] ...
| ALL
} { undrop_disk_clause
| diskgroup_availability
| enable_disable_volume
}
} ;
- [ALTER FLASHBACK ARCHIVE]
ALTER FLASHBACK ARCHIVE flashback_archive
{ SET DEFAULT
| { ADD | MODIFY } TABLESPACE tablespace [flashback_archive_quota]
| REMOVE TABLESPACE tablespace_name
| MODIFY RETENTION flashback_archive_retention
| PURGE { ALL | BEFORE { SCN expr | TIMESTAMP expr } }
| [NO] OPTIMIZE DATA
};
Note:
You can specify the `[NO]` `OPTIMIZE` `DATA` clause in this statement starting with Oracle Database 11g Release 2 (11.2.0.4).
- [ALTER FUNCTION]
ALTER FUNCTION [ schema. ] function function_compile_clause
- [ALTER INDEX]
ALTER INDEX [ schema. ]index
{ { deallocate_unused_clause
| allocate_extent_clause
| shrink_clause
| parallel_clause
| physical_attributes_clause
| logging_clause
} ...
| rebuild_clause
| PARAMETERS ( 'ODCI_parameters' )
| COMPILE
| { ENABLE | DISABLE }
| UNUSABLE
| VISIBLE | INVISIBLE
| RENAME TO new_name
| COALESCE
| { MONITORING | NOMONITORING } USAGE
| UPDATE BLOCK REFERENCES
| alter_index_partitioning
}
;
- [ALTER INDEXTYPE]
ALTER INDEXTYPE [ schema. ] indextype
{ { ADD | DROP } [ schema. ] operator ( parameter_types )
[ , { ADD | DROP } [schema. ] operator ( parameter_types ) ]... [ using_type_clause ]
| COMPILE
}
[ WITH LOCAL [ RANGE ] PARTITION ] [ storage_table_clause ]
;
- [ALTER JAVA]
ALTER JAVA
{ SOURCE | CLASS } [ schema. ]object_name
[ RESOLVER
( ( match_string [, ] { schema_name | - } )... )
]
{ { COMPILE | RESOLVE }
| invoker_rights_clause
} ;
- [ALTER LIBRARY]
ALTER LIBRARY [ schema. ] library_name library_compile_clause
- [ALTER MATERIALIZED VIEW]
ALTER MATERIALIZED VIEW
[ schema. ] materialized_view
[ physical_attributes_clause
| modify_mv_column_clause
| table_compression
| LOB_storage_clause [, LOB_storage_clause ]...
| modify_LOB_storage_clause [, modify_LOB_storage_clause ]...
| alter_table_partitioning
| parallel_clause
| logging_clause
| allocate_extent_clause
| deallocate_unused_clause
| shrink_clause
| { CACHE | NOCACHE }
]
[ alter_iot_clauses ]
[ USING INDEX physical_attributes_clause ]
[ MODIFY scoped_table_ref_constraint
| alter_mv_refresh
]
[ { ENABLE | DISABLE } QUERY REWRITE
| COMPILE
| CONSIDER FRESH
] ;
- [ALTER MATERIALIZED VIEW LOG]
ALTER MATERIALIZED VIEW LOG [ FORCE ]
ON [ schema. ]table
[ physical_attributes_clause
| add_mv_log_column_clause
| alter_table_partitioning
| parallel_clause
| logging_clause
| allocate_extent_clause
| shrink_clause
| move_mv_log_clause
| { CACHE | NOCACHE }
] [ mv_log_augmentation ] [ mv_log_purge_clause ]
;
- [ALTER OPERATOR]
ALTER OPERATOR [ schema. ] operator
{ add_binding_clause
| drop_binding_clause
| COMPILE
} ;
- [ALTER OUTLINE]
ALTER OUTLINE [ PUBLIC | PRIVATE ] outline
{ REBUILD
| RENAME TO new_outline_name
| CHANGE CATEGORY TO new_category_name
| { ENABLE | DISABLE }
} ...
;
- [ALTER PACKAGE]
ALTER PACKAGE [ schema. ] package package_compile_clause
- [ALTER PROCEDURE]
ALTER PROCEDURE [ schema. ] procedure procedure_compile_clause
- [ALTER PROFILE]
ALTER PROFILE profile LIMIT
{ resource_parameters | password_parameters } ...
;
- [ALTER RESOURCE COST]
ALTER RESOURCE COST
{ { CPU_PER_SESSION
| CONNECT_TIME
| LOGICAL_READS_PER_SESSION
| PRIVATE_SGA
} integer
} ...
;
- [ALTER ROLE]
ALTER ROLE role
{ NOT IDENTIFIED
| IDENTIFIED
{ BY password
| USING [ schema. ] package
| EXTERNALLY
| GLOBALLY
}
} ;
- [ALTER ROLLBACK SEGMENT]
ALTER ROLLBACK SEGMENT rollback_segment
{ ONLINE
| OFFLINE
| storage_clause
| SHRINK [ TO size_clause ]
};
- [ALTER SEQUENCE]
ALTER SEQUENCE [ schema. ] sequence
{ INCREMENT BY integer
| { MAXVALUE integer | NOMAXVALUE }
| { MINVALUE integer | NOMINVALUE }
| { CYCLE | NOCYCLE }
| { CACHE integer | NOCACHE }
| { ORDER | NOORDER }
} ...
;
- [ALTER SESSION]
ALTER SESSION
{ ADVISE { COMMIT | ROLLBACK | NOTHING }
| CLOSE DATABASE LINK dblink
| { ENABLE | DISABLE } COMMIT IN PROCEDURE
| { ENABLE | DISABLE } GUARD
| { ENABLE | DISABLE | FORCE } PARALLEL
{ DML | DDL | QUERY } [ PARALLEL integer ]
| { ENABLE RESUMABLE [ TIMEOUT integer ] [ NAME string ]
| DISABLE RESUMABLE
}
| SYNC WITH PRIMARY
| alter_session_set_clause
} ;
- [ALTER SYSTEM]
ALTER SYSTEM
{ archive_log_clause
| checkpoint_clause
| check_datafiles_clause
| distributed_recov_clauses
| FLUSH { SHARED_POOL | GLOBAL CONTEXT | BUFFER_CACHE
| REDO TO target_db_name [ [ NO ] CONFIRM APPLY ] }
| end_session_clauses
| SWITCH LOGFILE
| { SUSPEND | RESUME }
| quiesce_clauses
| rolling_migration_clauses
| security_clauses
| shutdown_dispatcher_clause
| REGISTER
| SET alter_system_set_clause
[ alter_system_set_clause ]...
| RESET alter_system_reset_clause
[ alter_system_reset_clause ]...
} ;
- [ALTER TABLE]
ALTER TABLE [ schema. ] table
[ alter_table_properties
| column_clauses
| constraint_clauses
| alter_table_partitioning
| alter_external_table
| move_table_clause
]
[ enable_disable_clause
| { ENABLE | DISABLE } { TABLE LOCK | ALL TRIGGERS }
] ...
;
- [ALTER TABLESPACE]
ALTER TABLESPACE tablespace
{ DEFAULT [ table_compression ] [ storage_clause ]
| MINIMUM EXTENT size_clause
| RESIZE size_clause
| COALESCE
| SHRINK SPACE [ KEEP size_clause]
| RENAME TO new_tablespace_name
| { BEGIN | END } BACKUP
| datafile_tempfile_clauses
| tablespace_logging_clauses
| tablespace_group_clause
| tablespace_state_clauses
| autoextend_clause
| flashback_mode_clause
| tablespace_retention_clause
} ;
- [ALTER TRIGGER]
ALTER TRIGGER [ schema. ] trigger
{ ENABLE
| DISABLE
| RENAME TO new_name
| trigger_compile_clause
} ;
- [ALTER TYPE]
ALTER TYPE [ schema. ]type alter_type_clauses
- [ALTER USER]
ALTER USER
{ user
{ IDENTIFIED
{ BY password [ REPLACE old_password ]
| EXTERNALLY [ AS 'certificate_DN' | AS 'kerberos_principal_name' ]
| GLOBALLY [ AS '[directory_DN]' ]
}
| DEFAULT TABLESPACE tablespace
| TEMPORARY TABLESPACE { tablespace | tablespace_group_name }
| { QUOTA { size_clause
| UNLIMITED
} ON tablespace
} ...
| PROFILE profile
| DEFAULT ROLE { role [, role ]...
| ALL [ EXCEPT role [, role ] ... ]
| NONE
}
| PASSWORD EXPIRE
| ACCOUNT { LOCK | UNLOCK }
| ENABLE EDITIONS [ FORCE ]
} ...
| user [, user ]... proxy_clause
} ;
- [ALTER VIEW]
ALTER VIEW [ schema. ] view
{ ADD out_of_line_constraint
| MODIFY CONSTRAINT constraint
{ RELY | NORELY }
| DROP { CONSTRAINT constraint
| PRIMARY KEY
| UNIQUE (column [, column ]...)
}
| COMPILE
| { READ ONLY | READ WRITE }
} ;
- [ANALYZE]
ANALYZE
{ { TABLE [ schema. ] table
| INDEX [ schema. ] index
} [ partition_extension_clause ]
| CLUSTER [ schema. ] cluster
}
{ validation_clauses
| LIST CHAINED ROWS [ into_clause ]
| DELETE [ SYSTEM ] STATISTICS
} ;
- [ASSOCIATE STATISTICS]
ASSOCIATE STATISTICS WITH
{ column_association | function_association }
[ storage_table_clause ] ;
- [AUDIT]
AUDIT
{ audit_operation_clause [ auditing_by_clause | IN SESSION CURRENT ]
| audit_schema_object_clause
| NETWORK
} [ BY { SESSION | ACCESS } ]
[ WHENEVER [ NOT ] SUCCESSFUL ]
;
- [CALL]
CALL
{ routine_clause
| object_access_expression
}
[ INTO :host_variable
[ [ INDICATOR ] :indicator_variable ] ] ;
- [COMMENT]
COMMENT ON
{ COLUMN [ schema. ]
{ table. | view. | materialized_view. } column
| EDITION edition_name
| INDEXTYPE [ schema. ] indextype
| MATERIALIZED VIEW materialized_view
| MINING MODEL [ schema. ] model
| OPERATOR [ schema. ] operator
| TABLE [ schema. ] { table | view }
}
IS string ;
- [COMMIT]
COMMIT [ WORK ]
[ [ COMMENT string ]
| [ WRITE [ WAIT | NOWAIT ] [ IMMEDIATE | BATCH ]
]
| FORCE string [, integer ]
] ;
- [CREATE CLUSTER]
CREATE CLUSTER [ schema. ] cluster
(column datatype [ SORT ]
[, column datatype [ SORT ] ]...
)
[ { physical_attributes_clause
| SIZE size_clause
| TABLESPACE tablespace
| { INDEX
| [ SINGLE TABLE ]
HASHKEYS integer [ HASH IS expr ]
}
}...
]
[ parallel_clause ]
[ NOROWDEPENDENCIES | ROWDEPENDENCIES ]
[ CACHE | NOCACHE ] ;
- [CREATE CONTEXT]
CREATE [ OR REPLACE ] CONTEXT namespace
USING [ schema. ] package
[ INITIALIZED { EXTERNALLY | GLOBALLY }
| ACCESSED GLOBALLY
] ;
- [CREATE CONTROLFILE]
CREATE CONTROLFILE
[ REUSE ] [ SET ] DATABASE database
[ logfile_clause ]
{ RESETLOGS | NORESETLOGS }
[ DATAFILE file_specification
[, file_specification ]... ]
[ MAXLOGFILES integer
| MAXLOGMEMBERS integer
| MAXLOGHISTORY integer
| MAXDATAFILES integer
| MAXINSTANCES integer
| { ARCHIVELOG | NOARCHIVELOG }
| FORCE LOGGING
]...
[ character_set_clause ] ;
- [CREATE DATABASE]
CREATE DATABASE [ database ]
{ USER SYS IDENTIFIED BY password
| USER SYSTEM IDENTIFIED BY password
| CONTROLFILE REUSE
| MAXDATAFILES integer
| MAXINSTANCES integer
| CHARACTER SET charset
| NATIONAL CHARACTER SET charset
| SET DEFAULT
{ BIGFILE | SMALLFILE } TABLESPACE
| database_logging_clauses
| tablespace_clauses
| set_time_zone_clause
}... ;
- [CREATE DATABASE LINK]
CREATE [ SHARED ] [ PUBLIC ] DATABASE LINK dblink
[ CONNECT TO
{ CURRENT_USER
| user IDENTIFIED BY password [ dblink_authentication ]
}
| dblink_authentication
]...
[ USING connect_string ] ;
- [CREATE DIMENSION]
CREATE DIMENSION [ schema. ] dimension
level_clause ...
{ hierarchy_clause
| attribute_clause
| extended_attribute_clause
}...
;
- [CREATE DIRECTORY]
CREATE [ OR REPLACE ] DIRECTORY directory
AS 'path_name' ;
- [CREATE DISKGROUP]
CREATE DISKGROUP diskgroup_name
[ { HIGH | NORMAL | EXTERNAL } REDUNDANCY ]
{ [ QUORUM | REGULAR ][ FAILGROUP failgroup_name ]
DISK qualified_disk_clause [, qualified_disk_clause]...
} ...
[ ATTRIBUTE { 'attribute_name' = 'attribute_value' }
[, 'attribute_name' = 'attribute_value' ]... ]
;
- [CREATE EDITION]
CREATE EDITION edition
[ AS CHILD OF parent_edition ] ;
- [CREATE FLASHBACK ARCHIVE]
CREATE FLASHBACK ARCHIVE [DEFAULT] flashback_archive
TABLESPACE tablespace
[flashback_archive_quota]
[ [NO] OPTIMIZE DATA ]
flashback_archive_retention
;
Note:
You can specify the `[NO]` `OPTIMIZE` `DATA` clause in this statement starting with Oracle Database 11g Release 2 (11.2.0.4).
- [CREATE FUNCTION]
CREATE [ OR REPLACE ] FUNCTION plsql_source
- [CREATE INDEX]
CREATE [ UNIQUE | BITMAP ] INDEX [ schema. ] index
ON { cluster_index_clause
| table_index_clause
| bitmap_join_index_clause
}
[ UNUSABLE ] ;
- [CREATE INDEXTYPE]
CREATE [ OR REPLACE ] INDEXTYPE [ schema. ] indextype
FOR [ schema. ] operator (paramater_type [, paramater_type ]...)
[, [ schema. ] operator (paramater_type [, paramater_type ]...)
]...
using_type_clause
[WITH LOCAL [RANGE] PARTITION ]
[ storage_table_clause ]
;
- [CREATE JAVA]
CREATE [ OR REPLACE ] [ AND { RESOLVE | COMPILE } ] [ NOFORCE ]
JAVA { { SOURCE | RESOURCE } NAMED [ schema. ] primary_name
| CLASS [ SCHEMA schema ]
}
[ invoker_rights_clause ]
[ RESOLVER ( (match_string [,] { schema_name | - })...) ]
{ USING { BFILE (directory_object_name, server_file_name)
| { CLOB | BLOB | BFILE } subquery
| 'key_for_BLOB'
}
| AS source_char
} ;
- [CREATE LIBRARY]
CREATE [ OR REPLACE ] LIBRARY plsql_source
- [CREATE MATERIALIZED VIEW]
CREATE MATERIALIZED VIEW [ schema. ] materialized_view
[ OF [ schema. ] object_type ]
[ ( { scoped_table_ref_constraint
| column_alias [ENCRYPT [encryption_spec]]
}
[, { scoped_table_ref_constraint
| column_alias [ENCRYPT [encryption_spec]]
}
]...
)
]
{ ON PREBUILT TABLE
[ { WITH | WITHOUT } REDUCED PRECISION ]
| physical_properties materialized_view_props
}
[ USING INDEX
[ physical_attributes_clause
| TABLESPACE tablespace
]...
| USING NO INDEX
]
[ create_mv_refresh ]
[ FOR UPDATE ]
[ { DISABLE | ENABLE } QUERY REWRITE ]
AS subquery ;
- [CREATE MATERIALIZED VIEW LOG]
CREATE MATERIALIZED VIEW LOG ON [ schema. ] table
[ physical_attributes_clause
| TABLESPACE tablespace
| logging_clause
| { CACHE | NOCACHE }
]...
[ parallel_clause ]
[ table_partitioning_clauses ]
[ WITH [ { OBJECT ID
| PRIMARY KEY
| ROWID
| SEQUENCE
| COMMIT SCN
}
[ { , OBJECT ID
| , PRIMARY KEY
| , ROWID
| , SEQUENCE
| , COMMIT SCN
}
]... ]
(column [, column ]...)
[ new_values_clause ]
] [ mv_log_purge_clause ]
;
- [CREATE OPERATOR]
CREATE [ OR REPLACE ] OPERATOR
[ schema. ] operator binding_clause ;
- [CREATE OUTLINE]
CREATE [ OR REPLACE ]
[ PUBLIC | PRIVATE ] OUTLINE [ outline ]
[ FROM [ PUBLIC | PRIVATE ] source_outline ]
[ FOR CATEGORY category ]
[ ON statement ] ;
- [CREATE PACKAGE]
CREATE [ OR REPLACE ] PACKAGE plsql_source
- [CREATE PACKAGE BODY]
CREATE [ OR REPLACE ] PACKAGE BODY plsql_source
- [CREATE PFILE]
CREATE PFILE [= 'pfile_name' ]
FROM { SPFILE [= 'spfile_name']
| MEMORY
} ;
- [CREATE PROCEDURE]
CREATE [ OR REPLACE ] PROCEDURE plsql_source
- [CREATE PROFILE]
CREATE PROFILE profile
LIMIT { resource_parameters
| password_parameters
}...
;
- [CREATE RESTORE POINT]
CREATE RESTORE POINT restore_point
[ AS OF {TIMESTAMP | SCN} expr ]
[ PRESERVE
| GUARANTEE FLASHBACK DATABASE
];
- [CREATE ROLE]
CREATE ROLE role
[ NOT IDENTIFIED
| IDENTIFIED { BY password
| USING [ schema. ] package
| EXTERNALLY
| GLOBALLY
}
] ;
- [CREATE ROLLBACK SEGMENT]
CREATE [ PUBLIC ] ROLLBACK SEGMENT rollback_segment
[ TABLESPACE tablespace | storage_clause ]... ;
- [CREATE SCHEMA]
CREATE SCHEMA AUTHORIZATION schema
{ create_table_statement
| create_view_statement
| grant_statement
}...
;
- [CREATE SEQUENCE]
CREATE SEQUENCE [ schema. ] sequence
[ { INCREMENT BY | START WITH } integer
| { MAXVALUE integer | NOMAXVALUE }
| { MINVALUE integer | NOMINVALUE }
| { CYCLE | NOCYCLE }
| { CACHE integer | NOCACHE }
| { ORDER | NOORDER }
]...
;
- [CREATE SPFILE]
CREATE SPFILE [= 'spfile_name' ]
FROM { PFILE [= 'pfile_name' ]
| MEMORY
} ;
- [CREATE SYNONYM]
CREATE [ OR REPLACE ] [ PUBLIC ] SYNONYM
[ schema. ] synonym
FOR [ schema. ] object [ @ dblink ] ;
- [CREATE TABLE]
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE daily';
EXCEPTION
WHEN OTHERS THEN NULL;
END;
/
CREATE TABLE daily (
id number(10) check (id > 0) NOT NULL,
title varchar2(128) DEFAULT NULL,
user_id varchar2(45) DEFAULT NULL,
createtime varchar2(45) DEFAULT NULL,
imgurl clob,
body clob,
url varchar2(256) DEFAULT NULL,
status number(3) DEFAULT 0,
rank number(3) DEFAULT 1 check (rank > 0),  -- the default must satisfy the check constraint
PRIMARY KEY (id)
) ;
-- Generate ID using sequence and trigger
CREATE SEQUENCE daily_seq START WITH 1 INCREMENT BY 1;
CREATE OR REPLACE TRIGGER daily_seq_tr
BEFORE INSERT ON daily FOR EACH ROW
WHEN (NEW.id IS NULL)
BEGIN
SELECT daily_seq.NEXTVAL INTO :NEW.id FROM DUAL;
END;
/
CREATE INDEX dailytitle ON daily (title);
CREATE [ GLOBAL TEMPORARY ] TABLE [ schema. ] table
{ relational_table | object_table | XMLType_table } ;
To see the SQL that created a table in PL/SQL Developer: right-click the table, then choose "DBMS_Metadata" → "DDL".
- [CREATE TABLESPACE]
CREATE
[ BIGFILE | SMALLFILE ]
{ permanent_tablespace_clause
| temporary_tablespace_clause
| undo_tablespace_clause
} ;
- [CREATE TRIGGER]
CREATE [ OR REPLACE ] TRIGGER plsql_source
- [CREATE TYPE]
CREATE [OR REPLACE] TYPE plsql_source
- [CREATE TYPE BODY]
CREATE [ OR REPLACE ] TYPE BODY plsql_source
- [CREATE USER]
CREATE USER user
IDENTIFIED { BY password
| EXTERNALLY [ AS 'certificate_DN' | AS 'kerberos_principal_name' ]
| GLOBALLY [ AS '[ directory_DN ]' ]
}
[ DEFAULT TABLESPACE tablespace
| TEMPORARY TABLESPACE
{ tablespace | tablespace_group_name }
| { QUOTA { size_clause | UNLIMITED } ON tablespace }...
| PROFILE profile
| PASSWORD EXPIRE
| ACCOUNT { LOCK | UNLOCK }
[ DEFAULT TABLESPACE tablespace
| TEMPORARY TABLESPACE
{ tablespace | tablespace_group_name }
| { QUOTA { size_clause | UNLIMITED } ON tablespace }...
| PROFILE profile
| PASSWORD EXPIRE
| ACCOUNT { LOCK | UNLOCK }
| ENABLE EDITIONS
]...
] ;
- [CREATE VIEW]
CREATE [OR REPLACE]
[[NO] FORCE] [EDITIONING] VIEW [schema.] view
[ ( { alias [ inline_constraint... ]
| out_of_line_constraint
}
[, { alias [ inline_constraint...]
| out_of_line_constraint
}
]
)
| object_view_clause
| XMLType_view_clause
]
AS subquery [ subquery_restriction_clause ] ;
- [DELETE]
DELETE [ hint ]
[ FROM ]
{ dml_table_expression_clause
| ONLY (dml_table_expression_clause)
} [ t_alias ]
[ where_clause ]
[ returning_clause ]
[error_logging_clause];
- [DISASSOCIATE STATISTICS]
DISASSOCIATE STATISTICS FROM
{ COLUMNS [ schema. ]table.column
[, [ schema. ]table.column ]...
| FUNCTIONS [ schema. ]function
[, [ schema. ]function ]...
| PACKAGES [ schema. ]package
[, [ schema. ]package ]...
| TYPES [ schema. ]type
[, [ schema. ]type ]...
| INDEXES [ schema. ]index
[, [ schema. ]index ]...
| INDEXTYPES [ schema. ]indextype
[, [ schema. ]indextype ]...
}
[ FORCE ] ;
- [DROP CLUSTER]
DROP CLUSTER [ schema. ] cluster
[ INCLUDING TABLES [ CASCADE CONSTRAINTS ] ] ;
- [DROP CONTEXT]
DROP CONTEXT namespace ;
- [DROP DATABASE]
DROP DATABASE ;
- [DROP DATABASE LINK]
DROP [ PUBLIC ] DATABASE LINK dblink ;
- [DROP DIMENSION]
DROP DIMENSION [ schema. ] dimension ;
- [DROP DIRECTORY]
DROP DIRECTORY directory_name ;
- [DROP DISKGROUP]
DROP DISKGROUP diskgroup_name
[ FORCE INCLUDING CONTENTS
| { INCLUDING | EXCLUDING } CONTENTS
];
- [DROP EDITION]
DROP EDITION edition [CASCADE];
- [DROP FLASHBACK ARCHIVE]
DROP FLASHBACK ARCHIVE flashback_archive;
- [DROP FUNCTION]
DROP FUNCTION [ schema. ] function_name ;
- [DROP INDEX]
DROP INDEX [ schema. ] index [ FORCE ] ;
- [DROP INDEXTYPE]
DROP INDEXTYPE [ schema. ] indextype [ FORCE ] ;
- [DROP JAVA]
DROP JAVA { SOURCE | CLASS | RESOURCE }
[ schema. ] object_name ;
- [DROP LIBRARY]
DROP LIBRARY library_name ;
- [DROP MATERIALIZED VIEW]
DROP MATERIALIZED VIEW [ schema. ] materialized_view
[ PRESERVE TABLE ] ;
- [DROP MATERIALIZED VIEW LOG]
DROP MATERIALIZED VIEW LOG ON [ schema. ] table ;
- [DROP OPERATOR]
DROP OPERATOR [ schema. ] operator [ FORCE ] ;
- [DROP OUTLINE]
DROP OUTLINE outline ;
- [DROP PACKAGE]
DROP PACKAGE [ BODY ] [ schema. ] package ;
- [DROP PROCEDURE]
DROP PROCEDURE [ schema. ] procedure ;
- [DROP PROFILE]
DROP PROFILE profile [ CASCADE ] ;
- [DROP RESTORE POINT]
DROP RESTORE POINT restore_point ;
- [DROP ROLE]
DROP ROLE role ;
- [DROP ROLLBACK SEGMENT]
DROP ROLLBACK SEGMENT rollback_segment ;
- [DROP SEQUENCE]
DROP SEQUENCE [ schema. ] sequence_name ;
- [DROP SYNONYM]
DROP [PUBLIC] SYNONYM [ schema. ] synonym [FORCE] ;
- [DROP TABLE]
DROP TABLE [ schema. ] table
[ CASCADE CONSTRAINTS ] [ PURGE ] ;
Oracle equivalent of "drop table if exists":
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE ' || table_name;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE != -942 THEN
RAISE;
END IF;
END;
- [DROP TABLESPACE]
DROP TABLESPACE tablespace
[ INCLUDING CONTENTS [ {AND | KEEP} DATAFILES ]
[ CASCADE CONSTRAINTS ]
] ;
- [DROP TRIGGER]
DROP TRIGGER [ schema. ] trigger ;
- [DROP TYPE]
DROP TYPE [ schema. ] type_name [ FORCE | VALIDATE ] ;
- [DROP TYPE BODY]
DROP TYPE BODY [ schema. ] type_name ;
- [DROP USER]
DROP USER user [ CASCADE ] ;
- [DROP VIEW]
DROP VIEW [ schema. ] view [ CASCADE CONSTRAINTS ] ;
- [EXPLAIN PLAN]
EXPLAIN PLAN
[ SET STATEMENT_ID = string ]
[ INTO [ schema. ] table [ @ dblink ] ]
FOR statement ;
- [FLASHBACK DATABASE]
FLASHBACK [ STANDBY ] DATABASE [ database ]
{ TO { { SCN | TIMESTAMP } expr
| RESTORE POINT restore_point
}
| TO BEFORE { { SCN | TIMESTAMP } expr
| RESETLOGS
}
} ;
- [FLASHBACK TABLE]
FLASHBACK TABLE
[ schema. ] table
[, [ schema. ] table ]...
TO { { { SCN | TIMESTAMP } expr
| RESTORE POINT restore_point
} [ { ENABLE | DISABLE } TRIGGERS ]
| BEFORE DROP [ RENAME TO table ]
} ;
- [GRANT]
GRANT { grant_system_privileges
| grant_object_privileges
} ;
- [INSERT]
INSERT [ hint ]
{ single_table_insert | multi_table_insert } ;
- [LOCK TABLE]
LOCK TABLE [ schema. ] { table | view }
[ partition_extension_clause
| @ dblink
] [, [ schema. ] { table | view }
[ partition_extension_clause
| @ dblink
]
]...
IN lockmode MODE
[ NOWAIT
| WAIT integer
] ;
- [MERGE]
MERGE [ hint ]
INTO [ schema. ] { table | view } [ t_alias ]
USING { [ schema. ] { table | view }
| subquery
} [ t_alias ]
ON ( condition )
[ merge_update_clause ]
[ merge_insert_clause ]
[ error_logging_clause ] ;
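A minimal MERGE (upsert) sketch, assuming hypothetical employees and bonuses tables:

MERGE INTO bonuses b
USING (SELECT employee_id, salary FROM employees WHERE department_id = 80) e
ON (b.employee_id = e.employee_id)
WHEN MATCHED THEN
  UPDATE SET b.bonus = e.salary * 0.10
WHEN NOT MATCHED THEN
  INSERT (b.employee_id, b.bonus)
  VALUES (e.employee_id, e.salary * 0.05);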
- [NOAUDIT]
NOAUDIT
{ audit_operation_clause [ auditing_by_clause ]
| audit_schema_object_clause
| NETWORK
}
[ WHENEVER [ NOT ] SUCCESSFUL ] ;
- [PURGE]
PURGE { { TABLE table | INDEX index }
| { RECYCLEBIN | DBA_RECYCLEBIN }
| TABLESPACE tablespace [ USER username ]
} ;
- [RENAME]
RENAME old_name TO new_name ;
- [REVOKE]
REVOKE { revoke_system_privileges
| revoke_object_privileges
} ;
- [ROLLBACK]
ROLLBACK [ WORK ]
[ TO [ SAVEPOINT ] savepoint
| FORCE string
] ;
- [SAVEPOINT]
SAVEPOINT savepoint ;
- [SELECT]
subquery [ for_update_clause ] ;
- [SET CONSTRAINT[S]]
SET { CONSTRAINT | CONSTRAINTS }
{ constraint [, constraint ]...
| ALL
}
{ IMMEDIATE | DEFERRED } ;
- [SET ROLE]
SET ROLE
{ role [ IDENTIFIED BY password ]
[, role [ IDENTIFIED BY password ] ]...
| ALL [ EXCEPT role [, role ]... ]
| NONE
} ;
- [SET TRANSACTION]
SET TRANSACTION
{ { READ { ONLY | WRITE }
| ISOLATION LEVEL
{ SERIALIZABLE | READ COMMITTED }
| USE ROLLBACK SEGMENT rollback_segment
} [ NAME string ]
| NAME string
} ;
- [TRUNCATE_CLUSTER]
TRUNCATE CLUSTER [schema.] cluster
[ {DROP | REUSE} STORAGE ] ;
- [TRUNCATE_TABLE]
TRUNCATE TABLE [schema.] table
[ {PRESERVE | PURGE} MATERIALIZED VIEW LOG ]
[ {DROP [ ALL ] | REUSE} STORAGE ] ;
Note:
You can specify the `ALL` keyword in this statement starting with Oracle Database 11g Release 2 (11.2.0.2).
- [UPDATE]
UPDATE [ hint ]
{ dml_table_expression_clause
| ONLY (dml_table_expression_clause)
} [ t_alias ]
update_set_clause
[ where_clause ]
[ returning_clause ]
[error_logging_clause] ;
A function is a command that manipulates data items and returns a single value.
The sections that follow show each SQL function and its related syntax. Refer to [Chapter 5, “Subclauses”](sqlqr05.htm#g1004143) for the syntax of the subclauses.
- [ABS]
ABS(n)
- [ACOS]
ACOS(n)
- [ADD_MONTHS]
ADD_MONTHS(date, integer)
- [aggregate_function]
Aggregate functions return a single result row based on groups of rows, rather than on single rows.
- [analytic_function]
analytic_function([ arguments ])
OVER (analytic_clause)
- [APPENDCHILDXML]
APPENDCHILDXML
( XMLType_instance, XPath_string, value_expr [, namespace_string ])
- [ASCII]
ASCII(char)
- [ASCIISTR]
ASCIISTR(char)
- [ASIN]
ASIN(n)
- [ATAN]
ATAN(n)
- [ATAN2]
ATAN2(n1 , n2)
- [AVG]
AVG([ DISTINCT | ALL ] expr) [ OVER(analytic_clause) ]
- [BFILENAME]
BFILENAME('directory', 'filename')
- [BIN_TO_NUM]
BIN_TO_NUM(expr [, expr ]... )
- [BITAND]
BITAND(expr1, expr2)
- [CARDINALITY]
CARDINALITY(nested_table)
- [CAST]
CAST({ expr | MULTISET (subquery) } AS type_name)
- [CEIL]
CEIL(n)
- [CHARTOROWID]
CHARTOROWID(char)
- [CHR]
CHR(n [ USING NCHAR_CS ])
- [CLUSTER_ID]
CLUSTER_ID ( [ schema . ] model mining_attribute_clause )
- [CLUSTER_PROBABILITY]
CLUSTER_PROBABILITY ( [ schema . ] model
[ , cluster_id ] mining_attribute_clause )
- [CLUSTER_SET]
CLUSTER_SET ( [ schema . ] model [ , topN [ , cutoff ] ] mining_attribute_clause )
- [COALESCE]
COALESCE(expr [, expr ]...)
- [COLLECT]
COLLECT( [ DISTINCT | UNIQUE ] column [ ORDER BY expr ] )
- [COMPOSE]
COMPOSE(char)
- [CONCAT]
CONCAT(char1, char2)
- [CONVERT]
CONVERT(char, dest_char_set[, source_char_set ])
- [CORR]
CORR(expr1, expr2) [ OVER (analytic_clause) ]
- [CORR_K], [CORR_S]
{ CORR_K | CORR_S }
(expr1, expr2
[, { COEFFICIENT
| ONE_SIDED_SIG
| ONE_SIDED_SIG_POS
| ONE_SIDED_SIG_NEG
| TWO_SIDED_SIG
}
]
)
- [COS]
COS(n)
- [COSH]
COSH(n)
- [COUNT]
COUNT({ * | [ DISTINCT | ALL ] expr }) [ OVER (analytic_clause) ]
- [COVAR_POP]
COVAR_POP(expr1, expr2)
[ OVER (analytic_clause) ]
- [COVAR_SAMP]
COVAR_SAMP(expr1, expr2) [ OVER (analytic_clause) ]
- [CUBE_TABLE]
CUBE_TABLE
( ' { schema.cube [ {HIERARCHY | HRR} dimension hierarchy ]...
| schema.dimension [ {HIERARCHY | HRR} [dimension] hierarchy ]
}
'
)
- [CUME_DIST (aggregate)]
CUME_DIST(expr[,expr ]...) WITHIN GROUP
(ORDER BY expr [ DESC | ASC ]
[ NULLS { FIRST | LAST } ]
[, expr [ DESC | ASC ]
[ NULLS { FIRST | LAST } ]
]...
)
- [CUME_DIST (analytic)]
CUME_DIST() OVER ([ query_partition_clause ] order_by_clause)
- [CURRENT_DATE]
CURRENT_DATE
- [CURRENT_TIMESTAMP]
CURRENT_TIMESTAMP [ (precision) ]
- [CV]
CV([ dimension_column ])
- [DATAOBJ_TO_PARTITION]
DATAOBJ_TO_PARTITION( table, partition_id )
- [DBTIMEZONE]
DBTIMEZONE
- [DECODE]
DECODE(expr, search, result [, search, result ]... [, default ])
- [DECOMPOSE]
DECOMPOSE( string [, { 'CANONICAL' | 'COMPATIBILITY' } ] )
- [DELETEXML]
DELETEXML( XMLType_instance, XPath_string [, namespace_string ])
- [DENSE_RANK (aggregate)]
DENSE_RANK(expr [, expr ]...) WITHIN GROUP
(ORDER BY expr [ DESC | ASC ]
[ NULLS { FIRST | LAST } ]
[,expr [ DESC | ASC ]
[ NULLS { FIRST | LAST } ]
]...
)
- [DENSE_RANK (analytic)]
DENSE_RANK( ) OVER([ query_partition_clause ] order_by_clause)
- [DEPTH]
DEPTH(correlation_integer)
- [DEREF]
DEREF(expr)
- [DUMP]
DUMP(expr[, return_fmt [, start_position [, length ] ]])
- [EMPTY_BLOB], [EMPTY_CLOB]
{ EMPTY_BLOB | EMPTY_CLOB }( )
- [EXISTSNODE]
EXISTSNODE
(XMLType_instance, XPath_string
[, namespace_string ]
)
- [EXP]
EXP(n)
- [EXTRACT (datetime)]
EXTRACT( { YEAR
| MONTH
| DAY
| HOUR
| MINUTE
| SECOND
| TIMEZONE_HOUR
| TIMEZONE_MINUTE
| TIMEZONE_REGION
| TIMEZONE_ABBR
}
FROM { expr }
)
- [EXTRACT (XML)]
EXTRACT(XMLType_instance, XPath_string [, namespace_string ])
- [EXTRACTVALUE]
EXTRACTVALUE(XMLType_instance, XPath_string [, namespace_string ])
- [FEATURE_ID]
FEATURE_ID( [ schema . ] model mining_attribute_clause )
- [FEATURE_SET]
FEATURE_SET( [ schema . ] model [, topN [, cutoff ]] mining_attribute_clause )
- [FEATURE_VALUE]
FEATURE_VALUE( [ schema . ] model [, feature_id ] mining_attribute_clause )
- [FIRST]
get the first row from groups:
select person_id, clinic_type as HOSP_TYPE_PREFERENCE
from (select person_id, clinic_type, count(*) cnt, max(count(*)) over (partition by person_id) max_count
from t_first_choice_hosp_type
group by person_id, clinic_type)
where cnt = max_count
aggregate_function
KEEP
(DENSE_RANK FIRST ORDER BY
expr [ DESC | ASC ]
[ NULLS { FIRST | LAST } ]
[, expr [ DESC | ASC ]
[ NULLS { FIRST | LAST } ]
]...
)
[ OVER ( [query_partition_clause] ) ]
- [FIRST_VALUE]
FIRST_VALUE
{ (expr) [ {RESPECT | IGNORE} NULLS ]
| (expr [ {RESPECT | IGNORE} NULLS ])
}
OVER (analytic_clause)
- [FLOOR]
FLOOR(n)
- [FROM_TZ]
FROM_TZ (timestamp_value, time_zone_value)
- [GREATEST]
GREATEST(expr [, expr ]...)
- [GROUP_ID]
GROUP_ID( )
- [GROUPING]
GROUPING(expr)
- [GROUPING_ID]
GROUPING_ID(expr [, expr ]...)
- [HEXTORAW]
HEXTORAW(char)
- [INITCAP]
INITCAP(char)
- [INSERTCHILDXML]
INSERTCHILDXML
( XMLType_instance, XPath_string, child_expr, value_expr [, namespace_string ] )
- [INSERTCHILDXMLAFTER]
INSERTCHILDXMLAFTER
( XMLType_instance, XPath_string, child_expr, value_expr [, namespace_string ] )
- [INSERTCHILDXMLBEFORE]
INSERTCHILDXMLBEFORE
( XMLType_instance, XPath_string, child_expr, value_expr [, namespace_string ] )
- [INSERTXMLAFTER]
INSERTXMLAFTER
( XMLType_instance, XPath_string, value_expr [, namespace_string ] )
- [INSERTXMLBEFORE]
INSERTXMLBEFORE
( XMLType_instance, XPath_string, value_expr [, namespace_string ] )
- [INSTR]
{ INSTR
| INSTRB
| INSTRC
| INSTR2
| INSTR4
}
(string , substring [, position [, occurrence ] ])
- [ITERATION_NUMBER]
ITERATION_NUMBER
- [LAG]
LAG
{ ( value_expr [, offset [, default]]) [ { RESPECT | IGNORE } NULLS ]
| ( value_expr [ { RESPECT | IGNORE } NULLS ] [, offset [, default]] )
}
OVER ([ query_partition_clause ] order_by_clause)
- [LAST]
aggregate_function KEEP
(DENSE_RANK LAST ORDER BY
expr [ DESC | ASC ]
[ NULLS { FIRST | LAST } ]
[, expr [ DESC | ASC ]
[ NULLS { FIRST | LAST } ]
]...
)
[ OVER ( [query_partition_clause] ) ]
- [LAST_DAY]
LAST_DAY(date)
- [LAST_VALUE]
LAST_VALUE
{ (expr) [ { RESPECT | IGNORE } NULLS ]
| (expr [ { RESPECT | IGNORE } NULLS ])
}
OVER (analytic_clause)
- [LEAD]
LEAD
{ ( value_expr [, offset [, default]] ) [ { RESPECT | IGNORE } NULLS ]
| ( value_expr [ { RESPECT | IGNORE } NULLS ] [, offset [, default]] )
}
OVER ([ query_partition_clause ] order_by_clause)
- [LEAST]
LEAST(expr [, expr ]...)
- [LENGTH]
{ LENGTH
| LENGTHB
| LENGTHC
| LENGTH2
| LENGTH4
}
(char)
- [LISTAGG]
Concatenates column values within each group (string aggregation):
select grid as person_id,
listagg(jgid, ',') within group ( order by jgid ) as hospitals
from rule_final t1 group by t1.grid
-- If you are using ClickHouse, just do
SELECT id, groupArray(object) FROM Table GROUP BY id
LISTAGG(measure_expr [, 'delimiter'])
WITHIN GROUP (order_by_clause) [OVER query_partition_clause]
- [LN]
LN(n)
- [LNNVL]
LNNVL(condition)
- [LOCALTIMESTAMP]
LOCALTIMESTAMP [ (timestamp_precision) ]
- [LOG]
LOG(n2, n1)
- [LOWER]
LOWER(char)
- [LPAD]
LPAD(expr1, n [, expr2 ])
- [LTRIM]
LTRIM(char [, set ])
- [MAKE_REF]
MAKE_REF({ table | view } , key [, key ]...)
- [MAX]
-- select the max count value
select val, title
from (select val, title, count(*) cnt, max(count(*)) over (partition by val) max_count
from a1
group by val, title)
where cnt = max_count;
MAX([ DISTINCT | ALL ] expr) [ OVER (analytic_clause) ]
- [MEDIAN]
MEDIAN(expr) [ OVER (query_partition_clause) ]
- [MIN]
MIN([ DISTINCT | ALL ] expr) [ OVER (analytic_clause) ]
- [MOD]
MOD(n2, n1)
- [MONTHS_BETWEEN]
MONTHS_BETWEEN(date1, date2)
- [NANVL]
NANVL(n2, n1)
- [NCHR]
NCHR(number)
- [NEW_TIME]
NEW_TIME(date, timezone1, timezone2)
- [NEXT_DAY]
NEXT_DAY(date, char)
- [NLS_CHARSET_DECL_LEN]
NLS_CHARSET_DECL_LEN(byte_count, char_set_id)
- [NLS_CHARSET_ID]
NLS_CHARSET_ID(string)
- [NLS_CHARSET_NAME]
NLS_CHARSET_NAME(number)
- [NLS_INITCAP]
NLS_INITCAP(char [, 'nlsparam' ])
- [NLS_LOWER]
NLS_LOWER(char [, 'nlsparam' ])
- [NLS_UPPER]
NLS_UPPER(char [, 'nlsparam' ])
- [NLSSORT]
NLSSORT(char [, 'nlsparam' ])
- [NTH_VALUE]
NTH_VALUE(measure_expr, n)
[ FROM { FIRST | LAST } ][ { RESPECT | IGNORE } NULLS ]
OVER (analytic_clause)
- [NTILE]
NTILE(expr) OVER ([ query_partition_clause ] order_by_clause)
- [NULLIF]
NULLIF(expr1, expr2)
- [NUMTODSINTERVAL]
NUMTODSINTERVAL(n, 'interval_unit')
- [NUMTOYMINTERVAL]
NUMTOYMINTERVAL(n, 'interval_unit')
- [NVL]
NVL(expr1, expr2)
- [NVL2]
NVL2(expr1, expr2, expr3)
- [ORA_DST_AFFECTED]
ORA_DST_AFFECTED(datetime_expr)
- [ORA_DST_CONVERT]
ORA_DST_CONVERT(datetime_expr [, integer [, integer ]])
- [ORA_DST_ERROR]
ORA_DST_ERROR(datetime_expr)
- [ORA_HASH]
ORA_HASH(expr [, max_bucket [, seed_value ] ])
- [PATH]
PATH(correlation_integer)
- [PERCENT_RANK (aggregate)]
PERCENT_RANK(expr [, expr ]...) WITHIN GROUP
(ORDER BY
expr [ DESC | ASC ]
[NULLS { FIRST | LAST } ]
[, expr [ DESC | ASC ]
[NULLS { FIRST | LAST } ]
]...
)
- [PERCENT_RANK (analytic)]
PERCENT_RANK( ) OVER ([ query_partition_clause ] order_by_clause)
- [PERCENTILE_CONT]
PERCENTILE_CONT(expr) WITHIN GROUP
(ORDER BY expr [ DESC | ASC ])
[ OVER (query_partition_clause) ]
- [PERCENTILE_DISC]
PERCENTILE_DISC(expr) WITHIN GROUP
(ORDER BY expr [ DESC | ASC ])
[ OVER (query_partition_clause) ]
- [POWER]
POWER(n2, n1)
- [POWERMULTISET]
POWERMULTISET(expr)
- [POWERMULTISET_BY_CARDINALITY]
POWERMULTISET_BY_CARDINALITY(expr, cardinality)
- [PREDICTION]
PREDICTION ( [ schema . ] model [ cost_matrix_clause ] mining_attribute_clause )
- [PREDICTION_BOUNDS]
PREDICTION_BOUNDS
( [schema.] model
[, confidence_level [, class_value]]
mining_attribute_clause
)
- [PREDICTION_COST]
PREDICTION_COST ( [ schema . ] model [ , class ] cost_matrix_clause
mining_attribute_clause )
- [PREDICTION_DETAILS]
PREDICTION_DETAILS ( [ schema . ] model mining_attribute_clause )
- [PREDICTION_PROBABILITY]
PREDICTION_PROBABILITY ( [ schema . ] model [ , class ]
mining_attribute_clause )
- [PREDICTION_SET]
PREDICTION_SET ( [ schema . ] model [ , bestN [ , cutoff ] ]
[ cost_matrix_clause ] mining_attribute_clause )
- [PRESENTNNV]
PRESENTNNV(cell_reference, expr1, expr2)
- [PRESENTV]
PRESENTV(cell_reference, expr1, expr2)
- [PREVIOUS]
PREVIOUS(cell_reference)
- [RANK (aggregate)]
RANK(expr [, expr ]...) WITHIN GROUP
(ORDER BY
expr [ DESC | ASC ]
[ NULLS { FIRST | LAST } ]
[, expr [ DESC | ASC ]
[ NULLS { FIRST | LAST } ]
]...
)
- [RANK (analytic)]
RANK( )
OVER ([ query_partition_clause ] order_by_clause)
- [RATIO_TO_REPORT]
RATIO_TO_REPORT(expr)
OVER ([ query_partition_clause ])
- [RAWTOHEX]
RAWTOHEX(raw)
- [RAWTONHEX]
RAWTONHEX(raw)
- [REF]
REF (correlation_variable)
- [REFTOHEX]
REFTOHEX (expr)
- [REGEXP_COUNT]
REGEXP_COUNT (source_char, pattern [, position [, match_param]])
- [REGEXP_INSTR]
REGEXP_INSTR (source_char, pattern
[, position
[, occurrence
[, return_opt
[, match_param
[, subexpr]
]
]
]
]
)
- [REGEXP_REPLACE]
REGEXP_REPLACE(source_char, pattern
[, replace_string
[, position
[, occurrence
[, match_param ]
]
]
]
)
- [REGEXP_SUBSTR]
REGEXP_SUBSTR(source_char, pattern
[, position
[, occurrence
[, match_param
[, subexpr
]
]
]
]
)
- [REGR_AVGX], [REGR_AVGY], [REGR_COUNT], [REGR_INTERCEPT], [REGR_R2], [REGR_SLOPE], [REGR_SXX], [REGR_SXY], [REGR_SYY]
{ REGR_SLOPE
| REGR_INTERCEPT
| REGR_COUNT
| REGR_R2
| REGR_AVGX
| REGR_AVGY
| REGR_SXX
| REGR_SYY
| REGR_SXY
}
(expr1 , expr2)
[ OVER (analytic_clause) ]
- [REMAINDER]
REMAINDER(n2, n1)
- [REPLACE]
REPLACE(char, search_string
[, replacement_string ]
)
- [ROUND (date)]
ROUND(date [, fmt ])
- [ROUND (number)]
ROUND(n [, integer ])
- [ROW_NUMBER]
ROW_NUMBER( )
OVER ([ query_partition_clause ] order_by_clause)
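A common ROW_NUMBER pattern: keep the latest row per group, assuming a hypothetical visits(person_id, visit_date) table:

SELECT person_id, visit_date
FROM (SELECT person_id, visit_date,
             ROW_NUMBER() OVER (PARTITION BY person_id ORDER BY visit_date DESC) rn
      FROM visits)
WHERE rn = 1;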
- [ROWIDTOCHAR]
ROWIDTOCHAR(rowid)
- [ROWIDTONCHAR]
ROWIDTONCHAR(rowid)
- [RPAD]
RPAD(expr1 , n [, expr2 ])
- [RTRIM]
RTRIM(char [, set ])
- [SCN_TO_TIMESTAMP]
SCN_TO_TIMESTAMP(number)
- [SESSIONTIMEZONE]
SESSIONTIMEZONE
- [SET]
SET (nested_table)
- [SIGN]
SIGN(n)
- [SIN]
SIN(n)
- [SINH]
SINH(n)
- [SOUNDEX]
SOUNDEX(char)
- [SQRT]
SQRT(n)
- [STATS_BINOMIAL_TEST]
STATS_BINOMIAL_TEST(expr1, expr2, p
[, { TWO_SIDED_PROB
| EXACT_PROB
| ONE_SIDED_PROB_OR_MORE
| ONE_SIDED_PROB_OR_LESS
}
]
)
- [STATS_CROSSTAB]
STATS_CROSSTAB(expr1, expr2
[, { CHISQ_OBS
| CHISQ_SIG
| CHISQ_DF
| PHI_COEFFICIENT
| CRAMERS_V
| CONT_COEFFICIENT
| COHENS_K
}
]
)
- [STATS_F_TEST]
STATS_F_TEST(expr1, expr2
[, { { STATISTIC
| DF_NUM
| DF_DEN
| ONE_SIDED_SIG
} , expr3
| TWO_SIDED_SIG
}
]
)
- [STATS_KS_TEST]
STATS_KS_TEST(expr1, expr2
[, { STATISTIC | SIG } ]
)
- [STATS_MODE]
STATS_MODE(expr)
- [STATS_MW_TEST]
STATS_MW_TEST(expr1, expr2
[, { STATISTIC
| U_STATISTIC
| ONE_SIDED_SIG , expr3
| TWO_SIDED_SIG
}
]
)
- [STATS_ONE_WAY_ANOVA]
STATS_ONE_WAY_ANOVA(expr1, expr2
[, { SUM_SQUARES_BETWEEN
| SUM_SQUARES_WITHIN
| DF_BETWEEN
| DF_WITHIN
| MEAN_SQUARES_BETWEEN
| MEAN_SQUARES_WITHIN
| F_RATIO
| SIG
}
]
)
- [STATS_T_TEST_INDEP], [STATS_T_TEST_INDEPU], [STATS_T_TEST_ONE], [STATS_T_TEST_PAIRED]
{
STATS_T_TEST_ONE ( expr1 [, expr2 ]
|
{ { STATS_T_TEST_PAIRED
| STATS_T_TEST_INDEP
| STATS_T_TEST_INDEPU
} ( expr1, expr2
}
}
[, { { STATISTIC | ONE_SIDED_SIG } , expr3 | TWO_SIDED_SIG | DF } ] )
- [STATS_WSR_TEST]
STATS_WSR_TEST(expr1, expr2
[, { STATISTIC
| ONE_SIDED_SIG
| TWO_SIDED_SIG
}
]
)
- [STDDEV]
STDDEV([ DISTINCT | ALL ] expr)
[ OVER (analytic_clause) ]
- [STDDEV_POP]
STDDEV_POP(expr)
[ OVER (analytic_clause) ]
- [STDDEV_SAMP]
STDDEV_SAMP(expr)
[ OVER (analytic_clause) ]
- [SUBSTR]
{ SUBSTR
| SUBSTRB
| SUBSTRC
| SUBSTR2
| SUBSTR4
}
(char, position [, substring_length ])
- [SUM]
SUM([ DISTINCT | ALL ] expr)
[ OVER (analytic_clause) ]
- [SYS_CONNECT_BY_PATH]
SYS_CONNECT_BY_PATH(column, char)
- [SYS_CONTEXT]
SYS_CONTEXT('namespace', 'parameter' [, length ])
- [SYS_DBURIGEN]
SYS_DBURIGEN({ column | attribute }
[ rowid ]
[, { column | attribute }
[ rowid ]
]...
[, 'text ( )' ]
)
- [SYS_EXTRACT_UTC]
SYS_EXTRACT_UTC(datetime_with_timezone)
- [SYS_GUID]
SYS_GUID( )
- [SYS_TYPEID]
SYS_TYPEID(object_type_value)
- [SYS_XMLAGG]
SYS_XMLAGG(expr [, fmt ])
- [SYS_XMLGEN]
SYS_XMLGEN(expr [, fmt ])
- [SYSDATE]
SYSDATE
- [SYSTIMESTAMP]
SYSTIMESTAMP
- [TAN]
TAN(n)
- [TANH]
TANH(n)
- [TIMESTAMP_TO_SCN]
TIMESTAMP_TO_SCN(timestamp)
- [TO_BINARY_DOUBLE]
TO_BINARY_DOUBLE(expr [, fmt [, 'nlsparam' ] ])
- [TO_BINARY_FLOAT]
TO_BINARY_FLOAT(expr [, fmt [, 'nlsparam' ] ])
- [TO_BLOB]
TO_BLOB ( raw_value )
- [TO_CHAR (character)]
TO_CHAR(nchar | clob | nclob)
- [TO_CHAR (datetime)]
TO_CHAR({ datetime | interval } [, fmt [, 'nlsparam' ] ])
- [TO_CHAR (number)]
TO_CHAR(n [, fmt [, 'nlsparam' ] ])
- [TO_CLOB]
TO_CLOB(lob_column | char)
- [TO_DATE]
TO_DATE(char [, fmt [, 'nlsparam' ] ])
to_date(char, 'YYYYMMDDHH24MISS')
- [TO_DSINTERVAL]
TO_DSINTERVAL ( ' { sql_format | ds_iso_format } ' )
- [TO_LOB]
TO_LOB(long_column)
- [TO_MULTI_BYTE]
TO_MULTI_BYTE(char)
- [TO_NCHAR (character)]
TO_NCHAR({char | clob | nclob})
- [TO_NCHAR (datetime)]
TO_NCHAR({ datetime | interval }
[, fmt [, 'nlsparam' ] ]
)
- [TO_NCHAR (number)]
TO_NCHAR(n [, fmt [, 'nlsparam' ] ])
- [TO_NCLOB]
TO_NCLOB(lob_column | char)
- [TO_NUMBER]
TO_NUMBER(expr [, fmt [, 'nlsparam' ] ])
- [TO_SINGLE_BYTE]
TO_SINGLE_BYTE(char)
- [TO_TIMESTAMP]
TO_TIMESTAMP(char [, fmt [, 'nlsparam' ] ])
- [TO_TIMESTAMP_TZ]
TO_TIMESTAMP_TZ(char [, fmt [, 'nlsparam' ] ])
- [TO_YMINTERVAL]
TO_YMINTERVAL
( ' { [+|-] years - months
| ym_iso_format
} ' )
- [TRANSLATE]
TRANSLATE(expr, from_string, to_string)
- [TRANSLATE … USING]
TRANSLATE ( char USING
{ CHAR_CS | NCHAR_CS }
)
- [TREAT]
TREAT(expr AS [ REF ] [ schema. ]type)
- [TRIM]
TRIM([ { { LEADING | TRAILING | BOTH }
[ trim_character ]
| trim_character
}
FROM
]
trim_source
)
- [TRUNC (date)]
TRUNC(date [, fmt ])
- [TRUNC (number)]
TRUNC(n1 [, n2 ])
- [TZ_OFFSET]
TZ_OFFSET({ 'time_zone_name'
| '{ + | - } hh : mi'
| SESSIONTIMEZONE
| DBTIMEZONE
}
)
- [UID]
UID
- [UNISTR]
UNISTR( string )
- [UPDATEXML]
UPDATEXML
(XMLType_instance,
XPath_string, value_expr
[, XPath_string, value_expr ]...
[, namespace_string ]
)
- [UPPER]
UPPER(char)
- [USER]
USER
- [user-defined function]
[ schema. ]
{ [ package. ]function | user_defined_operator }
[ @ dblink. ]
[ ( [ [ DISTINCT | ALL ] expr [, expr ]... ] ) ]
- [USERENV]
USERENV('parameter')
- [VALUE]
VALUE(correlation_variable)
- [VAR_POP]
VAR_POP(expr) [ OVER (analytic_clause) ]
- [VAR_SAMP]
VAR_SAMP(expr) [ OVER (analytic_clause) ]
- [VARIANCE]
VARIANCE([ DISTINCT | ALL ] expr)
[ OVER (analytic_clause) ]
- [VSIZE]
VSIZE(expr)
- [WIDTH_BUCKET]
WIDTH_BUCKET
(expr, min_value, max_value, num_buckets)
- [XMLAGG]
XMLAGG(XMLType_instance [ order_by_clause ])
- [XMLCAST]
XMLCAST ( value_expression AS datatype )
- [XMLCDATA]
XMLCDATA ( value_expr )
- [XMLCOLATTVAL]
XMLCOLATTVAL
(value_expr [ AS { c_alias | EVALNAME value_expr } ]
[, value_expr [ AS { c_alias | EVALNAME value_expr } ]
]...
)
- [XMLCOMMENT]
XMLCOMMENT ( value_expr )
- [XMLCONCAT]
XMLCONCAT(XMLType_instance [, XMLType_instance ]...)
- [XMLDIFF]
XMLDIFF ( XMLType_document, XMLType_document [ , integer, string ] )
- [XMLELEMENT]
XMLELEMENT
( [ ENTITYESCAPING | NOENTITYESCAPING ]
[ NAME ]
{ identifier
| EVALNAME value_expr
}
[, XML_attributes_clause ]
[, value_expr [ [AS] c_alias ]]...
)
- [XMLEXISTS]
XMLEXISTS ( XQuery_string [ XML_passing_clause ] )
- [XMLFOREST]
XMLFOREST
( value_expr [ AS { c_alias | EVALNAME value_expr } ]
[, value_expr [ AS { c_alias | EVALNAME value_expr } ]
]...
)
- [XMLISVALID]
XMLISVALID ( XMLType_instance [, XMLSchema_URL [, element ]] )
- [XMLPARSE]
XMLPARSE
({ DOCUMENT | CONTENT } value_expr [ WELLFORMED ]
)
- [XMLPATCH]
XMLPATCH ( XMLType_document, XMLType_document )
- [XMLPI]
XMLPI
( { [ NAME ] identifier
| EVALNAME value_expr
} [, value_expr ]
)
- [XMLQUERY]
XMLQUERY
( XQuery_string
[ XML_passing_clause ]
RETURNING CONTENT [NULL ON EMPTY]
)
- [XMLROOT]
XMLROOT
( value_expr, VERSION
{ value_expr | NO VALUE }
[, STANDALONE { YES | NO | NO VALUE } ]
)
- [XMLSEQUENCE]
XMLSEQUENCE( XMLType_instance
| sys_refcursor_instance [, fmt ]
)
- [XMLSERIALIZE]
XMLSERIALIZE
( { DOCUMENT | CONTENT } value_expr [ AS datatype ]
[ ENCODING xml_encoding_spec ]
[ VERSION string_literal ]
[ NO INDENT | { INDENT [SIZE = number] } ]
[ { HIDE | SHOW } DEFAULTS ]
)
- [XMLTABLE]
XMLTABLE
(
[ XMLnamespaces_clause , ] XQuery_string XMLTABLE_options
)
- [XMLTRANSFORM]
XMLTRANSFORM(XMLType_instance, { XMLType_instance
| string
}
)
An expression is a combination of one or more values, operators, and SQL functions that evaluate to a value. An expression generally assumes the data type of its components.
Expressions have several forms. The sections that follow show the syntax for each form of expression. Refer to [Chapter 5, “Subclauses”](sqlqr05.htm#g1004143) for the syntax of the subclauses.
- [CASE expressions]
classification of a column:
select distinct grid,
hznl,
case
when hznl between 0 and 6 then
'childhood'
when hznl between 7 and 17 then
'juvenile'
when hznl between 18 and 40 then
'youth'
when hznl between 41 and 65 then
'middle age'
when hznl >= 66 then
'old age'
end as age_group
from rule_final
CASE { simple_case_expression
| searched_case_expression
}
[ else_clause ]
END
- [Column expressions]
A column expression can be a simple expression, compound expression, function expression, or expression list, containing only columns of the subject table, constants, and deterministic functions.
- [Compound expressions]
{ (expr)
| { + | - | PRIOR } expr
| expr { * | / | + | - | || } expr
}
Note: The double vertical bars are part of the syntax
(indicating concatenation) rather than BNF notation.
- [CURSOR expressions]
CURSOR (subquery)
- [Datetime expressions]
expr AT
{ LOCAL
| TIME ZONE { ' [ + | - ] hh:mi'
| DBTIMEZONE
| 'time_zone_name'
| expr
}
}
- [Function expressions]
You can use any built-in SQL function or user-defined function as an expression.
- [Interval expressions]
( expr1 - expr2 )
{ DAY [ (leading_field_precision) ] TO
SECOND [ (fractional_second_precision) ]
| YEAR [ (leading_field_precision) ] TO
MONTH
}
- [Model expressions]
{ measure_column [ { condition | expr } [, { condition | expr } ]... ]
| aggregate_function
{ [ { condition | expr } [, { condition | expr } ]... ]
| [ single_column_for_loop [, single_column_for_loop ]... ]
| [ multi_column_for_loop ]
}
| analytic_function
}
Note: The square brackets around the condition and for-loop lists are literal brackets, not BNF notation.
- [Object access expressions]
{ table_alias.column.
| object_table_alias.
| (expr).
}
{ attribute [.attribute ]...
[.method ([ argument [, argument ]... ]) ]
| method ([ argument [, argument ]... ])
}
- [Placeholder expressions]
:host_variable
[ [ INDICATOR ]
:indicator_variable
]
- [Scalar subquery expressions]
A scalar subquery expression is a subquery that returns exactly one column value from one row.
- [Simple expressions]
{ [ query_name.
| [schema.]
{ table. | view. | materialized view. }
] { column | ROWID }
| ROWNUM
| string
| number
| sequence. { CURRVAL | NEXTVAL }
| NULL
}
- [Type constructor expressions]
[ NEW ] [ schema. ]type_name
([ expr [, expr ]... ])
A condition specifies a combination of one or more expressions and logical (Boolean) operators and returns a value of `TRUE`, `FALSE`, or unknown.
Conditions have several forms. The sections that follow show the syntax for each form of condition. Refer to [Chapter 5, “Subclauses”](sqlqr05.htm#g1004143) for the syntax of the subclauses.
- [BETWEEN condition]
expr1 [ NOT ] BETWEEN expr2 AND expr3
- [Compound conditions]
{ (condition)
| NOT condition
| condition { AND | OR } condition
}
- [EQUALS_PATH condition]
EQUALS_PATH
(column, path_string [, correlation_integer ])
- [EXISTS condition]
EXISTS (subquery)
- [Floating-point conditions]
expr IS [ NOT ] { NAN | INFINITE }
- [Group comparison conditions]
{ expr
{ = | != | ^= | <> | > | < | >= | <= }
{ ANY | SOME | ALL }
({ expression_list | subquery })
| ( expr [, expr ]... )
{ = | != | ^= | <> }
{ ANY | SOME | ALL }
({ expression_list
[, expression_list ]...
| subquery
}
)
}
where `!=`, `^=`, and `<>` test for inequality
- [IN condition]
{ expr [ NOT ] IN ({ expression_list | subquery })
| ( expr [, expr ]... )
[ NOT ] IN ({ expression_list [, expression_list ]...
| subquery
}
)
}
- [IS A SET condition]
nested_table IS [ NOT ] A SET
- [IS ANY condition]
[ dimension_column IS ] ANY
- [IS EMPTY condition]
nested_table IS [ NOT ] EMPTY
- [IS OF type condition]
expr IS [ NOT ] OF [ TYPE ]
([ ONLY ] [ schema. ] type
[, [ ONLY ] [ schema. ] type ]...
)
- [IS PRESENT condition]
cell_reference IS PRESENT
- [LIKE condition]
char1 [ NOT ] { LIKE | LIKEC | LIKE2 | LIKE4 }
char2 [ ESCAPE esc_char ]
-- If you really want to search for actual percent sign or underscore characters with the LIKE operator, you need to suppress the special meaning of those characters. You can do this with the ESCAPE option of the LIKE operator
SQL> select empno, begindate, comments
2 from history
3 where comments like '%0\%%' escape '\';
EMPNO BEGINDATE COMMENTS
-------- ----------- ----------------------------------------------------
7566 01-JUN-1989 From accounting to human resources; 0% salary change
7788 15-APR-1985 Transfer to human resources; 0% salary raise
- [Logical conditions]
{ NOT | AND | OR }
- [MEMBER condition]
expr [ NOT ] MEMBER [ OF ] nested_table
- [Null conditions]
expr IS [ NOT ] NULL
- [REGEXP_LIKE condition]
REGEXP_LIKE(source_char, pattern
[, match_param ]
)
- [Simple comparison conditions]
{ expr
{ = | != | ^= | <> | > | < | >= | <= }
expr
| (expr [, expr ]...)
{ = | != | ^= | <> }
( expression_list | subquery )
}
where `!=`, `^=`, and `<>` test for inequality
- [SUBMULTISET condition]
nested_table1
[ NOT ] SUBMULTISET [ OF ]
nested_table2
- [UNDER_PATH condition]
UNDER_PATH (column [, levels ], path_string
[, correlation_integer ]
)
A data type is a classification of a particular type of information or data. Each value manipulated by Oracle has a data type. The data type of a value associates a fixed set of properties with the value. These properties cause Oracle to treat values of one data type differently from values of another.
The data types recognized by Oracle are:
- ANSI-supported data types
{ CHARACTER [VARYING] (size)
| { CHAR | NCHAR } VARYING (size)
| VARCHAR (size)
| NATIONAL { CHARACTER | CHAR }
[VARYING] (size)
| { NUMERIC | DECIMAL | DEC }
[ (precision [, scale ]) ]
| { INTEGER | INT | SMALLINT }
| FLOAT [ (size) ]
| DOUBLE PRECISION
| REAL
}
- Oracle built-in data types
{ character_datatypes
| number_datatypes
| long_and_raw_datatypes
| datetime_datatypes
| large_object_datatypes
| rowid_datatypes
}
- Oracle-supplied data types
{ any_types
| XML_types
| spatial_types
| media_types
| expression_filter_type
}
- User-defined data types
User-defined data types use Oracle built-in data types and other user-defined data types to model the structure and behavior of data in applications.
This section describes the kinds of Oracle built-in data types.
- character_datatypes
{ CHAR [ (size [ BYTE | CHAR ]) ]
| VARCHAR2 (size [ BYTE | CHAR ])
| NCHAR [ (size) ]
| NVARCHAR2 (size)
}
- datetime_datatypes
{ DATE
| TIMESTAMP [ (fractional_seconds_precision) ]
[ WITH [ LOCAL ] TIME ZONE ]
| INTERVAL YEAR [ (year_precision) ] TO MONTH
| INTERVAL DAY [ (day_precision) ] TO SECOND
[ (fractional_seconds_precision) ]
}
- date range
select min(RYRQ) from ck10_ghdj
where RYRQ between date '2010-01-01' and date '2020-01-01'
- large_object_datatypes
{ BLOB | CLOB | NCLOB | BFILE }
- long_and_raw_datatypes
{ LONG | LONG RAW | RAW (size) }
- number_datatypes
{ NUMBER [ (precision [, scale ]) ]
| FLOAT [ (precision) ]
| BINARY_FLOAT
| BINARY_DOUBLE
}
- rowid_datatypes
{ ROWID | UROWID [ (size) ] }
The codes listed for the data types are used internally by Oracle Database. The data type code of a column or object attribute is returned by the DUMP function.
- find table name from a column name:
select table_name, column_name from dba_tab_columns where column_name like '%THE_COLUMN_YOU_LOOK_FOR%'; -- column names are stored in uppercase
- change language from unknown to Chinese:
create a Windows environment variable NLS_LANG with the value AMERICAN_AMERICA.AL32UTF8 or SIMPLIFIED CHINESE_CHINA.ZHS16GBK
- quantile, percentile:
SELECT
f.title,
ROUND(100.0 * (SELECT COUNT(*) FROM film AS f2 WHERE f2.length <= f.length) / totals.film_count, 1) AS percentile
FROM film f
CROSS JOIN (
SELECT COUNT(*) AS film_count
FROM film
) AS totals
ORDER BY percentile DESC;
- show tables and columns:
SELECT
*
FROM
pg_catalog.pg_tables
WHERE
schemaname != 'pg_catalog'
AND schemaname != 'information_schema';
select table_schema,
table_name,
ordinal_position as position,
column_name,
data_type,
case when character_maximum_length is not null
then character_maximum_length
else numeric_precision end as max_length,
is_nullable,
column_default as default_value
from information_schema.columns
where table_schema not in ('information_schema', 'pg_catalog')
order by table_schema,
table_name,
ordinal_position;