1. Single-statement variable declaration and initialization
DECLARE @MyVar INT = 10, @YourVar VARCHAR(10) = 'blabla';
SELECT @MyVar, @YourVar
2. C-style assignment operators
DECLARE @X INT = 10, @Y INT = 11;
SELECT @X += 5;
SELECT @X;
SELECT @X &= @Y
SELECT @X;
SELECT @X ^= @Y
SELECT @X;
These operators include:
Arithmetic operations:
+=, -=, *=, /=, %=
Bitwise operations:
^=, |=, &=
3. INSERT multiple records
CREATE TABLE #T (Name VARCHAR(10));
INSERT INTO #T
VALUES ('Kevin'),('Jack'),('George')
SELECT * FROM #T
DROP TABLE #T
4. MERGE statement - to implement "upsert" logic
CREATE TABLE #T1 (ID INT, Name VARCHAR(10));
CREATE TABLE #T2 (ID INT, Name VARCHAR(10));
INSERT INTO #T1 VALUES (1, 'Kevin'), (2, 'Jack') , (3, 'George');
INSERT INTO #T2 VALUES (1, 'John'), (3, 'Jack') , (4, 'Steven');
MERGE #T1 AS t USING #T2 AS s
ON (t.ID = s.ID)
WHEN MATCHED THEN
UPDATE SET NAME = s.NAME
WHEN NOT MATCHED THEN
INSERT (ID, Name) VALUES (s.ID, s.Name);
SELECT * FROM #T1;
DROP TABLE #T1;
DROP TABLE #T2;
In a MERGE statement, there are three types of match clauses:
WHEN MATCHED [ AND other_conditions]
WHEN NOT MATCHED [ BY TARGET ] [ AND other_conditions]
WHEN NOT MATCHED BY SOURCE [ AND other_conditions]
For the unmatched clauses, the default is BY TARGET; to act on the other side we have to explicitly specify BY SOURCE. If you know LEFT OUTER JOIN and RIGHT OUTER JOIN, you can see what this is talking about.
With optional extra conditions affixed to them, we are free to define as many cases as we need. When the cases overlap, we need to pay attention to handling them in order, e.g.,
WHEN MATCHED AND s.Name='George' THEN ...
WHEN MATCHED THEN ...
The cases are evaluated in order - a row is handled by the first case whose condition it satisfies.
When a row falls into a matching case, three special operations are available: INSERT, UPDATE, and DELETE. By special, I mean these statements inherit the MERGE statement's context, which they don't have outside a MERGE statement. For example, a bare DELETE is enough to mean "delete the current record from the target table"; we don't need to specify the table or the filter criteria explicitly.
Note that each matching block accepts only a single such action; general statements cannot be placed inside the MERGE blocks.
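To make the three clause types concrete, here is a minimal sketch reusing the #T1/#T2 tables from above (the conditions chosen are arbitrary); the bare DELETE statements rely on the inherited MERGE context described earlier:
MERGE #T1 AS t USING #T2 AS s
ON (t.ID = s.ID)
WHEN MATCHED AND s.Name = 'Jack' THEN
DELETE
WHEN MATCHED THEN
UPDATE SET Name = s.Name
WHEN NOT MATCHED BY TARGET THEN
INSERT (ID, Name) VALUES (s.ID, s.Name)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;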
Wednesday, August 25, 2010
Pagination with Sql Query Result
When querying a database, you don't want the whole result set back, because it may contain a huge number of records. Users see them one page at a time anyway. What you do is pass a page number and a page size as parameters and fetch just the records needed for viewing. Another case is so-called top-N, bottom-N, and inner-N reporting, where you likewise need to window out a certain range of records. To accomplish this, different database brands require slightly different clauses.
MySQL
With MySQL, the LIMIT clause provides the simplest solution, as shown in the following example:
-- Get records 501 to 600
SELECT * FROM EMPLOYEES WHERE id >10000 ORDER BY last_name LIMIT 100 OFFSET 500
SQL Server
With SQL Server, TOP is useful for getting only the first certain number of records:
SELECT TOP 20 * FROM EMPLOYEES WHERE ID>10000 ORDER BY LAST_NAME
However, it doesn't help if you want to retrieve a page other than the first. What SQL Server offers for this problem is ROW_NUMBER(), a system function for generating analytic data. Here is an example:
SELECT * FROM (
SELECT FIRST_NAME, EMAIL, ROW_NUMBER() OVER (ORDER BY LAST_NAME DESC) AS RN
FROM EMPLOYEES
) PAGED
WHERE RN BETWEEN 501 AND 600
ORDER BY RN
Note, the inner query result comes back ordered as defined in the ROW_NUMBER function. If you have more than one ROW_NUMBER() function in a query, as demonstrated in the following code snippet, the rows tend to come back ordered by the last one - although without an explicit outer ORDER BY, the order is not actually guaranteed.
select * from (
select name, email, ROW_NUMBER() over (order by name desc) as rn,
ROW_NUMBER() over (order by name asc) as rn2
from contact) result where rn between 7 and 10
order by rn desc
I don't think this solution is a clean cut for pagination, because you end up with a nested query and have to refactor the existing query to make it run. Say you already have a query and now just want to add pagination. SQL does allow * and window functions together, as shown in the following snippet, so you can add the ROW_NUMBER() column without changing much in the SELECT clause. But you still have to remove the ORDER BY clause from its original place and move it into the ROW_NUMBER function's OVER clause, or into the outer query, because an ORDER BY clause is normally not allowed in a subquery.
select *, ROW_NUMBER() over (order by name desc) as rn from contact
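For instance, to fetch the second page of ten rows from the snippet above, a wrapper like this would do (the page boundaries are made-up values):
select * from (
select *, ROW_NUMBER() over (order by name desc) as rn from contact
) paged
where rn between 11 and 20
order by rn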
Oracle
Needless to say, Oracle offers the ROW_NUMBER() analytic function just as SQL Server does. But Oracle also has a pseudo column, ROWNUM, that comes with every query; I believe it is the most efficient way and covers most cases of query pagination. One caveat: ROWNUM is assigned to rows as they pass the filter, so a condition like ROWNUM > 50 on its own never matches any row. The standard pattern puts the upper bound on an inner query and the lower bound on an outer one:
SELECT * FROM (
SELECT T.*, ROWNUM RN FROM (
SELECT * FROM EMPLOYEES ORDER BY LAST_NAME
) T WHERE ROWNUM < 101
) WHERE RN > 50
Still, to add pagination to an existing query you mostly just wrap it and add the ROWNUM conditions; little else changes.
More on ROW_NUMBER()
Certainly, with ROW_NUMBER() in SQL Server and Oracle you can do much more than pagination. The following example gives a taste: for each department, it gets the five employees with the smallest employee IDs:
SELECT * FROM
(
SELECT DEPARTMENT_NAME, LAST_NAME, EMPLOYEE_ID, ROW_NUMBER()
OVER (PARTITION BY DEPARTMENT_NAME ORDER BY EMPLOYEE_ID) AS RN_ID
FROM EMPLOYEES
) RANKED
WHERE RN_ID < 6
Pagination with LINQ to SQL
To end this topic, I would like to take a look at LINQ to SQL. You may know it has the Skip(m) and Take(n) feature (see the following snippet). But what does it end up translated into in SQL?
var query = (from employee in Employees
select employee).Skip(10).Take(20);
Under the hood it is translated into something like this:
SELECT FIRST_NAME, LAST_NAME, EMAIL
FROM (
SELECT ROW_NUMBER() OVER (
ORDER BY FIRST_NAME, LAST_NAME, EMAIL) AS [ROW_NUMBER],
FIRST_NAME,
LAST_NAME,
EMAIL
FROM EMPLOYEES
) T
WHERE [ROW_NUMBER] BETWEEN @P0+1 AND @P0+@P1
ORDER BY [ROW_NUMBER]
Here @p0 is the value specified in Skip(), while @p1 is the one in Take().
Friday, April 23, 2010
Implementation with DB
The time comes to do an implementation. Implementation refers to the process of promoting new development into production; in the RUP definition, it belongs to phase 4, Transition. Implementation may involve many aspects, hardware and software in a broader sense. However, database implementation tends to be the most challenging part, because any inappropriate activity can crash the production system, and that is hard to recover from. This article talks about the concerns.
A typical scenario I have encountered is applying a database enhancement onto an older version of an application that is live. The commonly suggested approach is as follows:
1. Back up the production database instance;
2. Create a new DB instance from the backup, making it a snapshot of the production database;
3. Run the enhancement scripts to upgrade that instance - schema changes, script logic changes, new entities or logic, and so on;
4. Test the upgraded instance. Once the tests pass, run the enhancement scripts on the production database.
The basic idea is creating a production snapshot and test on that instance before applying to production.
There are several ways to back up a database. Sybase provides the BACKUP SQL statement, or you can use the designated Backup Server Compression Module.
The syntax of the BACKUP statement:
BACKUP DATABASE TO archive-root
[ WAIT BEFORE START ]
[ WAIT AFTER END ]
[ DBFILE ONLY ]
[ TRANSACTION LOG ONLY ]
[ TRANSACTION LOG RENAME [ MATCH ] ]
[ TRANSACTION LOG TRUNCATE ]
[ ATTENDED { ON | OFF } ]
[ WITH COMMENT comment string ]
[ HISTORY { ON | OFF } ]
The Sybase Backup Server Compression Module backs up a database with a dedicated backup server, where compression/decompression happens along with the backup and load activities. This approach is suggested for transaction-intensive applications. To make the Backup Server work, we need to configure its IP address and port through the sql.ini file.
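For illustration, such an sql.ini entry might look like the following - the server name, host, and port here are placeholders, not values from any real setup:
[SYB_BACKUP]
master=TCP,backuphost,5001
query=TCP,backuphost,5001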
In addition, we can use the DUMP statement and the BCP tool to archive data. Following is a backup routine using the DUMP statement:
-- Backup routine
use master
go
sp_dboption dbname, "single user", true
go
use dbname
go
checkpoint
go
dbcc checkdb (dbname, skip_ncindex)
go
dbcc checkcatalog
go
dbcc checkalloc
go
use master
go
sp_dboption dbname, "single user", false
go
use dbname
go
checkpoint
go
dump tran dbname to device1
go
dump database dbname to device1
go
The DUMP statement can dump a database or transaction log, which can then be used for recovery with the LOAD statement. So we can copy a DB instance as sketched below:
1. Dump the DB;
2. Create a new DB instance;
3. Load the dumped file into the new instance.
In certain scenarios, we need BCP - a powerful tool for importing and exporting large amounts of data between text files and a database. The syntax looks like the following:
bcp xx.dbo.contacts in "C:\temp\inventory.txt" -c -T
bcp xx.dbo.contacts out "C:\temp\inventory.txt" -c -T
Whichever way you choose based on the concrete requirements, it is always a good idea to check the logs of the tools being used; they tell what happened and may point toward solving any problems encountered.
Thursday, February 11, 2010
Handle Dependency While Generating Deployment Script
When generating a deployment script for a DB schema, I usually use the delete-and-create approach shown in the following code. The problem I often encounter is that this sometimes fails when I put many blocks of this type together, because of dependencies between the entities being deleted and created.
IF OBJECT_ID('dbo.TE_DECOMP_CATEGORY') IS NOT NULL
DROP TABLE dbo.TE_DECOMP_CATEGORY
go
/* ***************************************************************
* @Summary: Create TE_DECOMP_CATEGORY table
* ***************************************************************/
CREATE TABLE dbo.TE_DECOMP_CATEGORY
(
CATEGORY varchar(30) NOT NULL,
DEFINITION varchar(200) NULL,
CONSTRAINT PK_TE_DECOMP_CATEGORY
PRIMARY KEY CLUSTERED (CATEGORY)
)
GRANT DELETE ON dbo.TE_DECOMP_CATEGORY TO DFR_application
go
GRANT INSERT ON dbo.TE_DECOMP_CATEGORY TO DFR_application
go
GRANT SELECT ON dbo.TE_DECOMP_CATEGORY TO DFR_application
go
GRANT UPDATE ON dbo.TE_DECOMP_CATEGORY TO DFR_application
go
GRANT SELECT ON dbo.TE_DECOMP_CATEGORY TO read_only
go
IF OBJECT_ID('dbo.TE_DECOMP_CATEGORY') IS NOT NULL
PRINT '<<< CREATED TABLE dbo.TE_DECOMP_CATEGORY >>>'
ELSE
PRINT '<<< FAILED CREATING TABLE dbo.TE_DECOMP_CATEGORY >>>'
go
Then I realized that I have to separate the deletion logic from the creation logic to avoid the problem. Theoretically it won't work, no matter how I order them, if I keep creation and deletion interleaved as a whole.
Imagine the following dependencies (circular dependencies are not allowed):
A->B->C->D;
B->E->F;
C->F;
E->D;
The deletion logic should be:
DELETE A;
DELETE B;
DELETE C;
DELETE E;
DELETE D;
DELETE F;
The reverse of that order then serves as the creation logic.
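Spelled out, the creation logic is simply that reverse:
CREATE F;
CREATE D;
CREATE E;
CREATE C;
CREATE B;
CREATE A;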
To recap, the key idea for resolving the dependency issue in deployment scripts is to separate the deletion logic from the creation logic. I believe most database products account for dependencies when generating scripts; what I need to ensure is that the deletion logic follows the dependency order.
Tuesday, January 12, 2010
Handle Truncate Exception with T-Sql
A colleague at RBC asked me to help with a truncation exception inside his very complicated Sybase reporting query.
The code that caused the truncation exception and terminated execution looks like the following:
SELECT POWER(1.09000000000012, 1/65559)*1.00006
Obviously, the truncation is caused by a precision setting applied implicitly. For numeric types such as decimal, numeric, int, bigint, float, real, and double, Sybase applies different precision and scale. For example, a decimal value can be given explicit precision by declaring it as DECIMAL(precision, scale) - scale is the number of digits after the decimal point, and precision is the total number of digits before and after it. By default a decimal value has a precision of 18 digits, and it can be specified up to 38 digits.
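As a quick illustration with made-up values:
-- DECIMAL(10, 4): 10 digits in total, 4 of them after the decimal point
DECLARE @d DECIMAL(10, 4)
SELECT @d = 123456.1234
SELECT @d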
To solve the problem, we need to CAST or CONVERT part of the expression so that the result of the operation stays within bounds. In the end, the following CAST solved his problem:
SELECT CAST(POWER(1.09000000000012, 1/65559) AS FLOAT) * 1.00006
Thursday, December 31, 2009
Sybase Temporary Table
There are seven types of table in Sybase. Aside from the regular permanent table and the proxy table, there are five types of temporary tables. The most commonly used temporary tables are the DECLARE temporary table and the CREATE temporary table. The big difference is that a DECLARE temporary table is code-block scoped, meaning it exists only within a code block, e.g. a stored procedure; on the other hand, a CREATE temporary table lives for the whole session or connection.
The syntax
<declare_local_temporary_table>
::= DECLARE LOCAL TEMPORARY TABLE <table_name>
<table_element_list>
[ <commit_action> ]
<table_name> ::= <identifier>
<commit_action> ::= ON COMMIT DELETE ROWS
| ON COMMIT PRESERVE ROWS
| NOT TRANSACTIONAL
<create_local_temporary_table>
::= CREATE LOCAL TEMPORARY TABLE <table_name>
<table_element_list>
[ <commit_action> ]
Syntactically, they are almost the same; only the keywords of the commands differ. Interestingly, we can even define the table's behavior on commit and rollback. ON COMMIT DELETE ROWS, for example, implicitly deletes all rows when a commit is issued, while ON COMMIT PRESERVE ROWS keeps the rows, as a normal transactional operation would.
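A minimal sketch of both flavors following the grammar above (table and column names are made up):
DECLARE LOCAL TEMPORARY TABLE tmp_scores (
id INT,
score INT
) ON COMMIT PRESERVE ROWS
CREATE LOCAL TEMPORARY TABLE tmp_totals (
id INT,
total INT
) NOT TRANSACTIONAL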
Thursday, December 10, 2009
Subquery
In the SQL language, subqueries can be used in many positions and bring considerable power.
As a COLUMN
select top 1 order_id
,convert(char(10), order_date, 121) order_dt
,(select convert(char(10), max(order_date), 121) from orders) max_order_date
,datediff(dd, order_date,
(select max(order_date) from orders)) day_diff
from orders
where city = 'Toronto'
order by order_date DESC
In WHERE Clause
select Country,CompanyName, ContactName, ContactTitle, Phone
from Northwind.dbo.Customers
where country =
(select top 1 country
from Northwind.dbo.Customers C
join
Northwind.dbo.Orders O
on C.CustomerId = O.CustomerID
group by country
order by count(*))
In FROM Clause
select au_lname, au_fname, title from
(select au_lname, au_fname, au_id from pubs.dbo.authors
where state = 'CA') as a
join
pubs.dbo.titleauthor ta on a.au_id=ta.au_id
join
pubs.dbo.titles t on ta.title_id = t.title_id
In HAVING Clause
select pub_name, count(*) bookcnt
from pubs.dbo.titles t
join
pubs.dbo.publishers p
on t.pub_id = p.pub_id
group by pub_name
having p.pub_name in
(select pub_name from pubs.dbo.publishers where state <> 'CA')
Wednesday, October 1, 2008
Transaction Mode and Transaction Isolation Level
Transaction Mode
Transact-SQL, or unchained, mode (the default)
Defining a transaction requires an explicit pair of statements: BEGIN TRAN, then COMMIT TRAN or ROLLBACK TRAN.
Chained, or ANSI-compatible, mode
A transaction implicitly begins before any data retrieval or data modification statement. These statements include:
delete, insert, update, open, fetch, select
You must explicitly end the transaction with commit or rollback.
set chained on
switches the session to chained mode. @@tranchained returns 0 for unchained, 1 for chained.
In the two modes, the following code produces different results.
insert into publishers values ('9999', null, null, null)
begin transaction
delete from publishers where pub_id = '9999'
rollback transaction
Transaction Isolation Level
@@isolation returns 1, 2, or 3, representing three different isolation levels:
Level 1: dirty reads
Dirty reads occur when one transaction modifies a row and a second transaction reads that row before the first transaction commits the change. If the first transaction rolls back the change, the information read by the second transaction becomes invalid. Transact-SQL never allows this to happen.
Level 2: non-repeatable reads (holdlock)
occur when one transaction reads a row and then a second transaction modifies that row. If the second transaction commits its change, subsequent reads by the first transaction yield different results than the original read.
Level 3: phantoms (holdlock)
occur when one transaction reads a set of rows that satisfy a search condition, and then a second transaction modifies the data (through an insert , delete , update , and so on). If the first transaction repeats the read with the same search conditions, it obtains a different set of rows.
The SQL92 standard requires that level 3 be the default isolation between transactions. This prevents "dirty reads," "non-repeatable reads," and "phantoms." To enforce this default level of isolation, Transact-SQL provides the transaction isolation level 3 option of the set statement. This instructs SQL Server to automatically apply a holdlock to all select operations in a transaction.
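For example, a session can raise its own level with that set option and verify it like this:
set transaction isolation level 3
go
select @@isolation
go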
Monday, September 29, 2008
Implement Transaction Integrity with Sybase 12.5
By S.Luann
With Sybase 12.5 (SQL Server 2005 behaves the same), the failure of a statement in a batch doesn't necessarily abort the batch. This behavior forces the developer to check the status of each DML statement that changes the database, in order to decide whether to commit or roll back the transaction.
In order to investigate this issue, I prepared two tables (List 1 and List 2) and a stored procedure (List 3).
List 1.
CREATE TABLE dbo.test
(
id decimal(38,0) NOT NULL,
name varchar(2048) NOT NULL,
CONSTRAINT test_340193231
FOREIGN KEY (id)
REFERENCES dbo.test2 (id)
)
List 2.
CREATE TABLE dbo.test2
(
id decimal(38,0) NOT NULL,
CONSTRAINT test2_3241931742
PRIMARY KEY NONCLUSTERED (id)
)
List 3.
CREATE PROCEDURE p_test
AS
BEGIN
BEGIN TRAN
INSERT INTO dbo.test
(
id,
name
)
VALUES
(100, NULL) -- CAUSE EXCEPTION BECAUSE NAME COLUMN IS NOT NULLABLE
INSERT INTO dbo.test
(
id,
name
)
VALUES
(100, 'THIS WORKS') -- THIS ONE WORKS
COMMIT TRAN
END
Table test2 contains only one record, with id equal to 100.
Executing the stored procedure in List 3 results in an exception saying null is not allowed, and no record is inserted. No transaction violation happens.
Then run List 4. It causes an exception about a referential integrity problem, yet a record is inserted into table test. It violates transaction integrity! This behavior shows that a run-time error doesn't stop the batch from running. To confirm, I executed List 5, which uses a parameter rather than a static value; it inserted a record and violated the transaction as well.
List 4.
CREATE PROCEDURE p_test
AS
BEGIN
BEGIN TRAN
INSERT INTO dbo.test
(
id,
name
)
VALUES
(99, 'Non null value') -- CAUSES EXCEPTION: ONLY ID 100 AVOIDS A REFERENTIAL VIOLATION
INSERT INTO dbo.test
(
id,
name
)
VALUES
(100, 'THIS WORKS') -- THIS ONE WORKS
COMMIT TRAN
END
List 5
CREATE PROCEDURE p_test
(
@name VARCHAR(50) = NULL
)
AS
BEGIN
BEGIN TRAN
INSERT INTO dbo.test
(
id,
name
)
VALUES
(100, @name) -- CAUSE EXCEPTION if the parameter is passing null
INSERT INTO dbo.test
(
id,
name
)
VALUES
(100, 'THIS WORKS') -- THIS ONE WORKS
COMMIT TRAN
END
GO
EXECUTE p_test
GO
Why does the first case not violate the transaction? I traced it step by step and found that the exception in the first case comes at compile time - execution never even enters the procedure. That is understandable: a static value can be checked against rules and constraints before the batch runs.
The Conclusion
To make the transaction work, I check the error status after each DML statement that changes the database. See List 6.
List 6.
CREATE PROCEDURE p_test
(
@name VARCHAR(50) = NULL
)
AS
BEGIN
BEGIN TRAN
INSERT INTO dbo.test
(
id,
name
)
VALUES
(100, @name) -- CAUSE EXCEPTION if the parameter is passing null
IF(@@error<>0) GOTO fail
INSERT INTO dbo.test
(
id,
name
)
VALUES
(100, 'THIS WORKS') -- THIS ONE WORKS
IF(@@error<>0) GOTO fail
COMMIT TRAN
RETURN(0)
fail:
ROLLBACK TRAN
RETURN(1)
END
GO
EXECUTE p_test
GO
EXECUTE p_test 'any name'
GO
Monday, September 22, 2008
ESP - Extended Stored Procedure
An ESP is a stored procedure, but instead of containing SQL statements and control-of-flow statements, it executes a compiled binary from a dynamic link library (DLL).
ESPs are used to perform tasks outside the RDBMS - email notification, for example - in response to an event.
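A sketch of registering and calling an ESP in Sybase; the procedure and DLL names here are hypothetical:
sp_addextendedproc 'xp_notify_dba', 'notifications.dll'
go
execute xp_notify_dba
go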
RPC - Remote Procedure Call
Code in one database can call a stored procedure in another database, even on another server.
The following code snippet is excerpted from the Sybase documentation:
sp_addserver westcoastsrv, ASEnterprise, hqsales
begin transaction rpc_tran1
update sales set commission=300 where salesid="12OZ"
exec westcoastsrv.salesdb..recordsalesproc
commit rpc_tran1
or
sp_addobjectdef salesrec,
"westcoastsrv.salesdb..sales", "table"
begin transaction cis_tran1
update sales set commission=300 where salesid="12OZ"
update salesrec set commission=300 where salesid="12OZ"
commit cis_tran1