
In merge replication, SQL Server continually synchronizes the schema, taking a Sch-M lock even when the schema has not changed, and this conflicts with the Sch-S locks taken by the business system's queries. To solve this, schema synchronization can be paused, but that causes another problem: updated stored procedures and similar objects are no longer distributed automatically. The workaround: enable schema replication just before synchronizing, and disable it again once synchronization completes.

-- Enable schema synchronization
EXEC sp_changemergepublication   @publication = 'BRM-NewCOPY',   @property = N'replicate_ddl', @value = 1;
GO

-- Update stored procedures ...

-- Disable schema synchronization
EXEC sp_changemergepublication   @publication = 'BRM-NewCOPY',   @property = N'replicate_ddl', @value = 0;
GO

Reference: http://msdn.microsoft.com/en-us/library/ms152562%28v=sql.90%29.aspx

By default, the following schema changes made at a Publisher running SQL Server 2005 are replicated to all SQL Server Subscribers:

  • ALTER TABLE
  • ALTER VIEW
  • ALTER PROCEDURE
  • ALTER FUNCTION
  • ALTER TRIGGER



Although the IDENTITY property automates row numbering within one table, separate tables, each with its own identifier column, can generate identical values, because the IDENTITY property is guaranteed to be unique only on the table where it is used. If an application must generate identifier columns that are unique across the entire database, or across every database on every networked computer in the world, use the ROWGUIDCOL property, the uniqueidentifier data type, and the NEWID function.

When defining a GUID column with the ROWGUIDCOL property, note the following:

  • A table can have only one ROWGUIDCOL column, and it must be defined using the uniqueidentifier data type.

  • The Database Engine does not automatically generate values for the column. To insert globally unique values, create a DEFAULT definition on the column that uses the NEWID function.

  • After the ROWGUIDCOL property is set, the column can be referenced in a select list with the $ROWGUID keyword, much as an IDENTITY column can be referenced with $IDENTITY.

  • The OBJECTPROPERTY function can be used to determine whether a table has a ROWGUIDCOL column, and the COLUMNPROPERTY function to determine that column's name.

  • Because the ROWGUIDCOL property does not enforce uniqueness, use a UNIQUE constraint to guarantee that values inserted into the ROWGUIDCOL column are unique.

Creating a new identifier column when creating a table



SQL Server selects the deadlock victim based on the following criteria:
1. Deadlock priority – the assigned DEADLOCK_PRIORITY of a given session determines
the relative importance of it completing its transactions, if that session is
involved in a deadlock. The session with the lowest priority will always be chosen as
the deadlock victim. Deadlock priority is covered in more detail later in this chapter.
2. Rollback cost – if two or more sessions involved in a deadlock have the same
deadlock priority, then SQL Server will choose as the deadlock victim the session that
has lowest estimated cost to roll back.
3. Resolution – when a deadlock occurs, SQL Server selects one of the participants as the victim, cancels that spid's current batch, and rolls back its transaction so the other spids can continue with their work. The deadlock victim receives a 1205 error:
Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
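The two selection criteria above can be sketched as a toy model (Python, illustrative only; the session data is hypothetical and SQL Server's internal costing is more involved):

```python
def choose_deadlock_victim(sessions):
    """Pick the victim per the criteria above: the session with the lowest
    DEADLOCK_PRIORITY loses; ties are broken by the lowest estimated
    rollback cost."""
    victim = min(sessions, key=lambda s: (s["priority"], s["rollback_cost"]))
    return victim["spid"]
```

For example, a session at priority -5 is always chosen over one at priority 0, regardless of how cheap the other session's rollback would be.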

There are circumstances (for example, a critical report that performs a long running
SELECT that must complete even if it is the ideal deadlock victim) where it may be
preferable to specify which process will be chosen as the deadlock victim in the event
of a deadlock, rather than let SQL Server decide based purely on the cost of rollback. As
demonstrated in Listing 7.18, SQL Server offers the ability to set, at the session or batch
level, a deadlock priority using the SET DEADLOCK_PRIORITY option.

-- Set a Low deadlock priority
SET DEADLOCK_PRIORITY LOW ;
GO
-- Set a High deadlock priority
SET DEADLOCK_PRIORITY HIGH ;
GO
-- Set a numeric deadlock priority (valid range: -10 to 10)
SET DEADLOCK_PRIORITY 2 ;

SQL Server 2005 and 2008, however, have three named deadlock priorities: Low, Normal, and High, as well
as a numeric range from -10 to +10, for fine-tuning the deadlock priority of different
operations.
 



1. RAID 0
RAID 0, most commonly known as striping, provides improved I/O rates (in terms of IOPS) by striping the data across multiple drives, allowing the read and write operations to be shared amongst the drives inside the array. This level of RAID provides the best performance for both read and write operations, but provides no redundancy or protection against data loss. In the event of a single disk failure in the array, all of the data is lost.
Example: a task A writing blocks a+b on RAID 0 puts a on disk 1 and b on disk 2, so:
n disks: n × (read + write)

2. RAID 1
RAID 1, most commonly known as mirroring, provides protection against the loss of
data from a single disk by mirroring the writes to a second disk, but doesn't provide added write performance, since the maximum write throughput of the array is limited to the I/O capacity
of a single disk.
Example: a task A writing a+b on RAID 1 writes a+b to disk 1 and a+b to disk 2, so:
n disks: (n reads + n/2 writes)

3. RAID 10
RAID 10, or RAID 1+0, is a nested RAID level known as a "striped pair of mirrors." It
provides redundancy by first mirroring each disk, using RAID 1, and then striping those
mirrored disks, with RAID 0, to improve performance.
Example: a task A writing a+b on RAID 10 writes a to disk 1 and its mirror, and b to disk 2 and its mirror, so:
n disks: (n reads + n/2 writes)

4. RAID 01
RAID 01 or RAID 0+1, is a nested RAID level also known as "mirrored pairs of striped
disks." In a RAID 0+1 configuration, the nested levels are the opposite of the RAID 1+0,
with the disks first being striped in a RAID 0 and then mirrored using RAID 1. However,
this type of configuration only allows for a single disk loss from one side of the array,
since a single disk failure in a RAID 0 array causes that array to fail. A loss of a single disk
in both of the RAID 0 striped arrays would result in total data loss.

5. RAID 5 (and 6)
RAID 5 is commonly known as "striping with parity": the data is striped across multiple
disks, as in RAID 0, but parity data is stored in order to provide protection from single
disk failure. The minimum number of disks required for a RAID 5 array is three. The
parity blocks containing the parity data are staggered across the stripes inside the array.
n disks: (n-1) × (read + write)

RAID 0 – I/O per disk = (reads + writes) / number of disks

RAID 1 – I/O per disk = [reads + (2 × writes)] / 2

RAID 5 – I/O per disk = [reads + (4 × writes)] / number of disks

RAID 10 – I/O per disk = [reads + (2 × writes)] / number of disks

From the above, comparing RAID 5 with RAID 10: RAID 5 places a higher I/O load on each individual disk, so RAID 10 is usually chosen.
RAID 5 parity is a decent way to implement fault tolerance, but it is expensive in terms of disk writes: every logical write request costs RAID 5 four IOPS.

The reason each write is so expensive:

  •     Read the original data (1 IOP)
  •     Read the current parity data (1 IOP)
  •     Compare the current data with the new write request
  •     Compute the new parity value from the difference
  •     Write the new data (1 IOP)
  •     Write the new parity value (1 IOP)
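The per-disk formulas above can be wrapped in a short sketch (Python, for illustration; the write penalties are the standard values quoted above, and the 600/200 workload is just an example):

```python
# Standard RAID write penalties: each logical write costs this many
# physical IOPS (RAID 5 pays 4, per the read-modify-write steps above).
WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid5": 4, "raid10": 2}

def per_disk_iops(reads, writes, disks, level):
    """I/O load per disk = (reads + penalty * writes) / number of disks.
    For RAID 1 pass disks=2, since it is a mirrored pair."""
    return (reads + WRITE_PENALTY[level] * writes) / disks

# A 600-read / 200-write workload on 8 disks:
#   RAID 5  -> (600 + 4*200) / 8 = 175 IOPS per disk
#   RAID 10 -> (600 + 2*200) / 8 = 125 IOPS per disk
```

The 40% higher per-disk load of RAID 5 in this example is exactly why RAID 10 is preferred for write-heavy workloads.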


6. Which level to choose?
Data files:
1. The choice depends heavily on the read-to-write ratio of the workload. SQL Server tracks the I/O usage of the database files for an instance and makes this information available in the sys.dm_io_virtual_file_stats dynamic management function. This information can be used to determine whether the overall workload is heavier in reads or in writes.
For a database that is primarily read-only, RAID 5 or RAID 6 can offer good read performance.
2. RAID 5 or 6 arrays are commonly used for data warehouses, or for storing data where write latency doesn't impact overall system performance.
3. For OLTP implementations of write-heavy databases, RAID 1+0 provides the best performance.

Log files:
Since the transaction log is written to sequentially, RAID 1 can be used in most situations. The transaction log for each database should be located on dedicated physical disks.

tempdb: As a general rule, the tempdb database files should be physically separate from the user data files and transaction log files, on a dedicated disk array. Since tempdb is a write-heavy database, RAID 1 or RAID 1+0 are usually the configurations best able to support its concurrent workload. Since tempdb is used for temporary storage only, Solid State Disks and even RAM disks can be used to significantly improve its I/O characteristics.

 

http://blog.tianya.cn/blogger/post_show.asp?idWriter=0&Key=0&BlogID=854352&PostID=20759437



-- Show which trace flags are currently enabled
DBCC TRACESTATUS()

-- Enable 3205 for the current connection
DBCC TRACEON (3205)
-- Enable 1222 globally
DBCC TRACEON (1222, -1)


When looking at the wait statistics being tracked by SQL Server, it's important that these
wait types are eliminated from the analysis, allowing the more problematic waits in the
system to be identified. One of the things I do as a part of tracking wait information is to
maintain a script that filters out the non-problematic wait types, as shown in Listing 1.2.

Troubleshooting SQL Server
A Guide for the Accidental DBA

--1, check the wait types; if they point to I/O, then check sys.dm_io_virtual_file_stats
--2, review the SQL Server perfmon counters

SELECT TOP 10
wait_type ,
max_wait_time_ms wait_time_ms ,
signal_wait_time_ms ,
wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms ,
100.0 * wait_time_ms / SUM(wait_time_ms) OVER ( )
AS percent_total_waits ,
100.0 * signal_wait_time_ms / NULLIF(SUM(signal_wait_time_ms) OVER ( ),0)
AS percent_total_signal_waits ,
100.0 * ( wait_time_ms - signal_wait_time_ms )
/ SUM(wait_time_ms) OVER ( ) AS percent_total_resource_waits
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0 -- remove zero wait_time
AND wait_type NOT IN -- filter out additional irrelevant waits
( 'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
'SQLTRACE_BUFFER_FLUSH','CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
'LAZYWRITER_SLEEP', 'SLEEP_SYSTEMTASK', 'SLEEP_BPOOL_FLUSH',
'BROKER_EVENTHANDLER', 'XE_DISPATCHER_WAIT', 'FT_IFTSHC_MUTEX',
'CHECKPOINT_QUEUE', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
'BROKER_TRANSMITTER', 'FT_IFTSHC_MUTEX', 'KSOURCE_WAKEUP',
'LAZYWRITER_SLEEP', 'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BAD_PAGE_PROCESS',
'DBMIRROR_EVENTS_QUEUE', 'BROKER_RECEIVE_WAITFOR',
'PREEMPTIVE_OS_GETPROCADDRESS', 'PREEMPTIVE_OS_AUTHENTICATIONOPS',
'WAITFOR', 'DISPATCHER_QUEUE_SEMAPHORE', 'XE_DISPATCHER_JOIN',
'RESOURCE_QUEUE' )
ORDER BY wait_time_ms DESC


CXPACKET
Often indicates nothing more than that certain queries are executing with parallelism;
CXPACKET waits in the server are not an immediate sign of problems.

SOS_SCHEDULER_YIELD
This may indicate that the server is under CPU pressure.

THREADPOOL
This wait may indicate the need for an increase in the number of CPUs in the server, to
handle a highly concurrent workload, or it can be a sign of blocking.


LCK_*
These wait types signify that blocking is occurring in the system and that sessions
have had to wait to acquire a lock of a specific type. This problem can be investigated
further using the information in the sys.dm_db_index_operational_stats DMF.


PAGEIOLATCH_*, IO_COMPLETION, WRITELOG
These waits are commonly associated with disk I/O bottlenecks, though the root cause
of the problem may lie elsewhere. PAGEIOLATCH_* waits are specifically associated with
delays in being able to read or write data from the database files. WRITELOG waits are
related to issues with writing to log files. These waits should be evaluated in conjunction
with the virtual file statistics as well as Physical Disk performance counters.


PAGELATCH_*
These waits are often associated with allocation contention. One of the best-known
allocation issues associated with PAGELATCH_* waits occurs in tempdb when a large
number of objects are being created and destroyed and the system experiences
contention on the Shared Global Allocation Map (SGAM), Global Allocation Map
(GAM), and Page Free Space (PFS) pages of the tempdb database.




LATCH_*
The specific latch class that has the most accumulated wait time associated with it can
be found by querying the sys.dm_os_latch_stats DMV.

ASYNC_NETWORK_IO
This wait is often incorrectly attributed to a network bottleneck. In fact, the most
common cause is a client application performing row-by-row processing of the data
being streamed from SQL Server as a result set (the client accepts one row, processes
it, accepts the next row, and so on). Correcting this wait generally requires changing
the client-side code so that it reads the result set as fast as possible, and only then
performs its processing.



After fixing any problem in the server, in order to validate that the problem has indeed
been fixed, the wait statistics being tracked by the server can be reset using the code in
Listing 1.3.

DBCC SQLPERF('sys.dm_os_wait_stats', clear)


-------------- disk I/O bottleneck --------------
sys.dm_io_virtual_file_stats provides cumulative physical I/O statistics: the number of reads
and writes on each data file, and the number of reads and writes on each log file, for
the various databases in the instance, from which the ratio of reads to
writes can be calculated. It also shows the number of I/O stalls and the stall time associated with the
requests, which is the total amount of time sessions have waited for I/O to be completed
on the file.
Look at whether each file is heavy-read or heavy-write, and at the average latency associated with the I/O, as this will
direct further investigation and possible solutions.


SELECT DB_NAME(vfs.database_id) AS database_name ,

vfs.database_id ,
vfs.FILE_ID ,
io_stall_read_ms / NULLIF(num_of_reads, 0) AS avg_read_latency ,
io_stall_write_ms / NULLIF(num_of_writes, 0)
AS avg_write_latency ,
io_stall / NULLIF(num_of_reads + num_of_writes, 0)
AS avg_total_latency ,
num_of_bytes_read / NULLIF(num_of_reads, 0)
AS avg_bytes_per_read ,
num_of_bytes_written / NULLIF(num_of_writes, 0)
AS avg_bytes_per_write ,
vfs.io_stall ,
vfs.num_of_reads ,
vfs.num_of_bytes_read ,
vfs.io_stall_read_ms ,
vfs.num_of_writes ,
vfs.num_of_bytes_written ,
vfs.io_stall_write_ms ,
size_on_disk_bytes / 1024 / 1024. AS size_on_disk_mbytes ,
physical_name
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf ON vfs.database_id = mf.database_id
AND vfs.FILE_ID = mf.FILE_ID
ORDER BY avg_total_latency DESC



SELECT * FROM sys.dm_os_performance_counters

DECLARE @CounterPrefix NVARCHAR(30)
SET @CounterPrefix = CASE WHEN @@SERVICENAME = 'MSSQLSERVER'
THEN 'SQLServer:'
ELSE 'MSSQL$' + @@SERVICENAME + ':'
END ;
-- Capture the first counter set
SELECT CAST(1 AS INT) AS collection_instance ,
[OBJECT_NAME] ,
counter_name ,
instance_name ,
cntr_value ,
cntr_type ,
CURRENT_TIMESTAMP AS collection_time
INTO #perf_counters_init
FROM sys.dm_os_performance_counters
WHERE ( OBJECT_NAME = @CounterPrefix + 'Access Methods'
AND counter_name = 'Full Scans/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Access Methods'
AND counter_name = 'Index Searches/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Buffer Manager'
AND counter_name = 'Lazy Writes/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Buffer Manager'
AND counter_name = 'Page life expectancy'
)
OR ( OBJECT_NAME = @CounterPrefix + 'General Statistics'
AND counter_name = 'Processes Blocked'
)
OR ( OBJECT_NAME = @CounterPrefix + 'General Statistics'
AND counter_name = 'User Connections'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Locks'
AND counter_name = 'Lock Waits/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Locks'
AND counter_name = 'Lock Wait Time (ms)'
)

OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'SQL Re-Compilations/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Memory Manager'
AND counter_name = 'Memory Grants Pending'
)
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'Batch Requests/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'SQL Compilations/sec'
)
-- Wait one second between the two data collections
WAITFOR DELAY '00:00:01'
-- Capture the second counter set
SELECT CAST(2 AS INT) AS collection_instance ,
OBJECT_NAME ,
counter_name ,
instance_name ,
cntr_value ,
cntr_type ,
CURRENT_TIMESTAMP AS collection_time
INTO #perf_counters_second
FROM sys.dm_os_performance_counters
WHERE ( OBJECT_NAME = @CounterPrefix + 'Access Methods'
AND counter_name = 'Full Scans/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Access Methods'
AND counter_name = 'Index Searches/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Buffer Manager'
AND counter_name = 'Lazy Writes/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Buffer Manager'
AND counter_name = 'Page life expectancy'
)
OR ( OBJECT_NAME = @CounterPrefix + 'General Statistics'
AND counter_name = 'Processes Blocked'
)
OR ( OBJECT_NAME = @CounterPrefix + 'General Statistics'
AND counter_name = 'User Connections'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Locks'
AND counter_name = 'Lock Waits/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Locks'
AND counter_name = 'Lock Wait Time (ms)'
)
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'SQL Re-Compilations/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'Memory Manager'
AND counter_name = 'Memory Grants Pending'
)
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'Batch Requests/sec'
)
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'SQL Compilations/sec'
)
-- Calculate the cumulative counter values
SELECT i.OBJECT_NAME ,
i.counter_name ,
i.instance_name ,
CASE WHEN i.cntr_type = 272696576
THEN s.cntr_value - i.cntr_value
WHEN i.cntr_type = 65792 THEN s.cntr_value
END AS cntr_value
FROM #perf_counters_init AS i
JOIN #perf_counters_second AS s
ON i.collection_instance + 1 = s.collection_instance
AND i.OBJECT_NAME = s.OBJECT_NAME
AND i.counter_name = s.counter_name
AND i.instance_name = s.instance_name
ORDER BY OBJECT_NAME
-- Cleanup tables
DROP TABLE #perf_counters_init
DROP TABLE #perf_counters_second


The two Access Methods counters provide information about the ways that tables
are being accessed in the database. The most important one is the Full Scans/sec
counter, which can give us an idea of the number of index and table scans that are
occurring in the system.

In general, I want the number
of Index Searches/sec to be higher than the number of Full Scans/sec by a factor
of 800–1000. If the number of Full Scans/sec is too high, refer to Chapter 5, Missing
Indexes to determine if there are missing indexes in the database, resulting in excess
I/O operations.

Page Life Expectancy (PLE) is the number of seconds a page will remain in the
data cache. A common rule of thumb is to be concerned when PLE <= (max server memory in GB / 4) × 300 seconds.
If a low PLE is accompanied by Lazy Writes/sec, which are page flushes from the buffer cache outside of the normal CHECKPOINT
process, then the server is most likely experiencing data cache memory pressure,
which will also increase the disk I/O being performed by the SQL Server. At that point
the Access Methods counters should be investigated to determine if excessive table or
index scans are being performed on the SQL Server.
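The rule of thumb just quoted can be expressed directly (Python, illustrative; the 64 GB figure below is only an example):

```python
def ple_threshold_seconds(max_server_memory_gb):
    """PLE values below (max server memory in GB / 4) * 300 seconds
    suggest buffer pool memory pressure, per the guideline above."""
    return (max_server_memory_gb / 4) * 300

# A server with max server memory = 64 GB should sustain a PLE of
# at least (64 / 4) * 300 = 4800 seconds.
```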


The General Statistics\Processes Blocked and Locks\Lock Waits/sec counters:
if these counters return a value other than zero over repeated collections of the data,
then blocking is actively occurring in one of the databases, and the blocking should be
investigated further.
 
The higher the number of SQL Compilations/sec in relation to the Batch
Requests/sec, the more likely the SQL Server is experiencing an ad hoc workload
that is not making optimal use of plan caching. The higher the number of SQL
Re-Compilations/sec in relation to the Batch Requests/sec, the more likely it is
that there is an inefficiency in the code design that is forcing a recompile of the code
being executed in the SQL Server. In either case, investigation of the plan cache
is warranted.
 
The Memory Manager\Memory Grants Pending performance counter provides
the number of processes waiting on a workspace memory grant in the instance. If this
counter has a high value, there may be query inefficiencies in the instance causing
excessive memory grant requirements, for example large sorts or hashes, which can be
resolved by tuning the indexing or the queries being executed.





1. Enable the trace flags, writing to the log:
DBCC TRACEON (3605, 1222, -1) -- 3605 writes to the error log; 1222 reports deadlock details
DBCC TRACEON (1222, -1) -- or add -T1222 as a startup parameter
The Deadlock graph event in Profiler can also be enabled at the same time for a graphical view.
2. Analyze the log generated under the installation directory:
   1) Identify the deadlocked resources; how to inspect them depends on the object type:
DBCC TRACEON(3604)

DBCC PAGE(8,1,96,1)
DBCC TRACEOFF(3604)

Precondition for a deadlock: each side holds a resource the other needs. If a resource is not actually necessary, it can be filtered out.
An unnecessary resource: the other session's rows were merely scanned, and scanned rows get locked; avoid having them scanned, for example by adding an index.
3. Ways to handle deadlocks:
 1) From the performance angle: optimize the SQL.
 2) From the business-logic angle: see whether the reference to the deadlocked data can be removed.
 3) If the deadlock still cannot be eliminated, treat it as normal; it cannot always be avoided, but returning the 1205 error to the client can be, by wrapping the code in TRY...CATCH and retrying after a short wait.
For example:

/* Keep deadlock errors from being shown to the client */
DECLARE @retries INT ;
SET @retries = 4 ;
WHILE ( @retries > 0 )
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION ;
            -- place sql code here
            SET @retries = 0 ;
            COMMIT TRANSACTION ;
        END TRY
        BEGIN CATCH
            -- Error is a deadlock
            IF ( ERROR_NUMBER() = 1205 )
                SET @retries = @retries - 1 ;
                -- Error is not a deadlock
            ELSE
                BEGIN
                    DECLARE @ErrorMessage NVARCHAR(4000) ;
                    DECLARE @ErrorSeverity INT ;
                    DECLARE @ErrorState INT ;
                    SELECT  @ErrorMessage = ERROR_MESSAGE() ,
                            @ErrorSeverity = ERROR_SEVERITY() ,
                            @ErrorState = ERROR_STATE() ;
                    -- Re-Raise the Error that caused the problem
                    RAISERROR (@ErrorMessage, -- Message text.
                    @ErrorSeverity, -- Severity.
                    @ErrorState -- State.
                    ) ;
                    SET @retries = 0 ;
                END
            IF XACT_STATE() <> 0
                ROLLBACK TRANSACTION ;
        END CATCH ;
    END ;
GO

/* Deadlock simulation
1. Create the data
2. Run two concurrent looping transactions
*/
drop table Employee_Demo_Heap
go
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [Employee_Demo_Heap](
	[EmployeeID] [int] NOT NULL,
	[NationalIDNumber] [nvarchar](15) NOT NULL,
	[ContactID] [int] NOT NULL,
	[LoginID] [nvarchar](256) NOT NULL,
	[ManagerID] [int] NULL,
	[Title] [nvarchar](50) NOT NULL,
	[BirthDate] [datetime] NOT NULL,
	[MaritalStatus] [nchar](1) NOT NULL,
	[Gender] [nchar](1) NOT NULL,
	[HireDate] [datetime] NOT NULL,
	[ModifiedDate] [datetime] NOT NULL   DEFAULT (getdate()),
 CONSTRAINT [PK_Employee_EmployeeID_Demo_Heap] PRIMARY KEY nonCLUSTERED 
(
	[EmployeeID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, 
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE NONCLUSTERED INDEX [IX_Employee_ManagerID_Demo_Heap] ON [Employee_Demo_Heap] 
(
	[ManagerID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX [IX_Employee_ModifiedDate_Demo_Heap] ON [Employee_Demo_Heap] 
(
	[ModifiedDate] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
GO
insert into Employee_Demo_Heap
select [EmployeeID] ,
	[NationalIDNumber] ,
	[ContactID] ,
	[LoginID] ,
	[ManagerID],
	[Title] ,
	[BirthDate] ,
	[MaritalStatus] ,
	[Gender] ,
	[HireDate] ,
	[ModifiedDate] from HumanResources.Employee
go


Now use the following pair of scripts to simulate a deadlock. In one connection, run the statements below: they repeatedly open a transaction that first updates the record with NationalIDNumber = '480951955', then selects it back, and finally commits.
set nocount on
go
while 1=1
begin
begin tran
update dbo.Employee_Demo_Heap
set BirthDate = getdate()
where NationalIDNumber = '480951955'
select * from dbo.Employee_Demo_Heap
where NationalIDNumber = '480951955'
commit tran
end
In another connection, run the same statements; the only difference is that they update and query a different record, NationalIDNumber = '407505660'.
set nocount on
go
while 1=1
begin
begin tran
update dbo.Employee_Demo_Heap
set BirthDate = getdate()
where NationalIDNumber = '407505660'
select * from dbo.Employee_Demo_Heap 
where NationalIDNumber = '407505660'
commit tran
end




SCOPE_IDENTITY returns the last identity value generated for any table in the current session and current scope.
@@IDENTITY returns the last identity value generated for any table across all scopes of the current session.
IDENT_CURRENT returns the last identity value generated for a specific table in any session and any scope.

If a trigger exists that inserts into another table, @@IDENTITY returns that other table's identity value rather than the one you want, so use SCOPE_IDENTITY(). IDENT_CURRENT has concurrency problems and is not considered.

@@IDENTITY is not affected by concurrency, because it is limited to the current session,
but it can be affected by other scopes within that session.
For example, if table ta has an INSERT trigger that also inserts into tb whenever ta receives an insert,
then @@IDENTITY goes wrong: it picks up tb's new identity value instead of ta's.
Nested stored procedures can produce the same problem.
To stay within the current session and read only the current scope, use the SCOPE_IDENTITY() function.

If the table being inserted into has no INSERT trigger that inserts rows into other tables and generates new identity values,
@@IDENTITY works fine; each connection sees its own value, independently of the others.
SCOPE_IDENTITY returns the last IDENTITY value inserted into an IDENTITY column in the same scope,
so SCOPE_IDENTITY() also works here.

IDENT_CURRENT is limited neither by scope nor by session, but only by the specified table: it returns the value generated for that table in any session and any scope,
and is therefore affected by concurrency.



1. A stored procedure is slow from the application but fast in Management Studio. Analyzing the plan cache shows that two execution plans were generated, so the two callers are not using the same plan; this situation is brought on by parameter sniffing.
2. Why different plans are produced: one trigger for recompiling an execution plan is a change in context, namely the SET options. These options affect how statements execute and can even change their results, so SQL Server generates a separate compiled plan for each combination. Management Studio enables SET ARITHABORT ON by default (terminate the query when an overflow or divide-by-zero occurs during execution), while the application does not; the differing SET options cause new plans to be compiled instead of reused.
Final setting: find out which SET options the stored procedure's caller uses, then turn them on or off in Management Studio under Tools - Options - Query Execution - Advanced.
3. So, to reuse the plan the application generated: 1) determine its SET options; 2) apply the same context to the current session.

--Find the SET options
Method 1:
SELECT sql,setopts
FROM sys.syscacheobjects
WHERE OBJECT_NAME(objid,DB_ID()) = 'frmUser_sel'

Method 2:
SELECT plan_handle,usecounts,pvt.set_options, pvt.sql_handle
FROM (
    SELECT plan_handle, epa.attribute, epa.value ,usecounts
    FROM sys.dm_exec_cached_plans 
    CROSS APPLY sys.dm_exec_sql_text(plan_handle) qt
    OUTER APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
    WHERE cacheobjtype = 'Compiled Plan'
    AND  OBJECT_NAME(objectid,dbid) = 'frmUser_sel' AND dbid = DB_ID()
    ) AS ecpa 
PIVOT (MAX(ecpa.value) FOR ecpa.attribute IN ("set_options", "sql_handle")) AS pvt;
GO



--Pass in the value retrieved above to list the individual SET options
DECLARE @options INT
SET @options = 251
 
IF ( (1 & @options) = 1 ) PRINT 'DISABLE_DEF_CNST_CHK' 
IF ( (2 & @options) = 2 ) PRINT 'IMPLICIT_TRANSACTIONS' 
IF ( (4 & @options) = 4 ) PRINT 'CURSOR_CLOSE_ON_COMMIT' 
IF ( (8 & @options) = 8 ) PRINT 'ANSI_WARNINGS' 
IF ( (16 & @options) = 16 ) PRINT 'ANSI_PADDING' 
IF ( (32 & @options) = 32 ) PRINT 'ANSI_NULLS' 
IF ( (64 & @options) = 64 ) PRINT 'ARITHABORT' 
IF ( (128 & @options) = 128 ) PRINT 'ARITHIGNORE'
IF ( (256 & @options) = 256 ) PRINT 'QUOTED_IDENTIFIER' 
IF ( (512 & @options) = 512 ) PRINT 'NOCOUNT' 
IF ( (1024 & @options) = 1024 ) PRINT 'ANSI_NULL_DFLT_ON' 
IF ( (2048 & @options) = 2048 ) PRINT 'ANSI_NULL_DFLT_OFF' 
IF ( (4096 & @options) = 4096 ) PRINT 'CONCAT_NULL_YIELDS_NULL' 
IF ( (8192 & @options) = 8192 ) PRINT 'NUMERIC_ROUNDABORT' 
IF ( (16384 & @options) = 16384 ) PRINT 'XACT_ABORT'

-- View the current session's options: SELECT @@OPTIONS. Reference: SET Options
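The same bitmask decoding that the T-SQL above performs can be sketched in Python (the bit values mirror the PRINT list, and 251 is the sample value used above):

```python
SET_OPTION_BITS = {
    1: "DISABLE_DEF_CNST_CHK", 2: "IMPLICIT_TRANSACTIONS",
    4: "CURSOR_CLOSE_ON_COMMIT", 8: "ANSI_WARNINGS",
    16: "ANSI_PADDING", 32: "ANSI_NULLS", 64: "ARITHABORT",
    128: "ARITHIGNORE", 256: "QUOTED_IDENTIFIER", 512: "NOCOUNT",
    1024: "ANSI_NULL_DFLT_ON", 2048: "ANSI_NULL_DFLT_OFF",
    4096: "CONCAT_NULL_YIELDS_NULL", 8192: "NUMERIC_ROUNDABORT",
    16384: "XACT_ABORT",
}

def decode_set_options(value):
    """List the SET options encoded in a cached plan's set_options mask."""
    return [name for bit, name in SET_OPTION_BITS.items() if value & bit]

# decode_set_options(251) includes ARITHABORT (bit 64): precisely the
# option whose on/off mismatch between Management Studio and the
# application causes two plans to be cached.
```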

4. The above explains why the same stored procedure parameters produce different execution plans, but what about the difference in speed? The cause is parameter sniffing. Remedies: 1) recompile the stored procedure: sp_recompile 'sp'; 2) add OPTION (RECOMPILE) to the key statement; 3) pin a value: OPTION (OPTIMIZE FOR (@pid = 897)).



ISNULL:
check_expression

The expression to be checked for NULL. check_expression can be of any type.

replacement_value

The expression to be returned when check_expression is NULL. replacement_value must be of a type that is implicitly convertible to the type of check_expression.

ISNULL: when the first argument is NULL, the second argument is converted to the first argument's data type!
Test:

DECLARE @s VARCHAR(10),@result VARCHAR(50)
           SET @result = REPLICATE(1,50)
           SELECT LEN(@result)            -- 50
           SELECT LEN(ISNULL(@s,@result)) -- 10: @result is truncated to @s's type, VARCHAR(10)



Why optimize table space: the less data each row holds, the more rows a page can store, so I/O drops and performance improves.
Methods: 1. Use int for the base table and identity columns, and prefer int within the table.
        2. Store large values in a separate table.

Evaluating the effect: compare the average number of rows stored per page before and after optimization; higher is better.

SELECT m.tablename AS table_name, m.indexname AS index_name, m.total_pages, m.ROWS AS total_rows, m.AvgPerPage AS avg_rows_per_page,
CONVERT(VARCHAR(5),8060/AvgPerPage) + ' bytes' AS avg_bytes_per_row
FROM (
    SELECT OBJECT_NAME(idx.OBJECT_ID) tablename,idx.NAME indexname,
    SUM(total_pages) total_pages,SUM(ptt.rows) ROWS,SUM(rows)/SUM(total_pages) AvgPerPage
    FROM sys.indexes idx
    JOIN sys.partitions ptt ON idx.index_id = ptt.index_id AND idx.object_id = ptt.object_id
    JOIN sys.allocation_units au ON au.container_id = ptt.partition_id
    WHERE idx.object_id = object_id('bdorder')
    GROUP BY OBJECT_NAME(idx.OBJECT_ID),idx.name
)m
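The arithmetic the query performs (8060 usable bytes per page) can be sketched separately (Python, with illustrative figures):

```python
PAGE_DATA_BYTES = 8060  # usable row space on an 8 KB SQL Server data page

def avg_rows_per_page(total_rows, total_pages):
    """Integer division, matching SUM(rows)/SUM(total_pages) above."""
    return total_rows // total_pages

def avg_bytes_per_row(total_rows, total_pages):
    """8060 / rows-per-page, matching the query's final column."""
    return PAGE_DATA_BYTES // avg_rows_per_page(total_rows, total_pages)

# 100,000 rows over 1,000 pages -> 100 rows per page, about 80 bytes per row.
```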




--EXEC SyncSpToLinkServer @LinkServer='remoteserver1', @Name ='perfmonFix'
--Synchronize stored procedures, functions, and views to the linked server's tempdb
--Key point: execute SQL on the linked server: EXEC('exec tempdb.dbo.sp_executesql @statement =N''select 1''') AT remoteserver1
CREATE PROCEDURE SyncSpToLinkServer
@LinkServer VARCHAR(255),
@name VARCHAR(255) --name of the stored procedure, function, or view
AS
BEGIN
 DECLARE @sql VARCHAR(max)
 SET @sql = '
 DECLARE @IsDrop BIT,@IsCreate BIT,@type VARCHAR(100),@tmp VARCHAR(1000)
 SET @IsDrop = 0 --by default, do not drop the object on the linked server
 SET @IsCreate = 1 --by default, create the object on the linked server

 SELECT
 @IsCreate = CASE WHEN  rp.modify_date <= lp.modify_date THEN 1 ELSE 0 END,--is the linked server copy out of date?
 @IsDrop = CASE WHEN  rp.modify_date <= lp.modify_date THEN 1 ELSE 0 END,
 @type = CASE(rp.type)
  WHEN ''V'' THEN ''VIEW''
  WHEN ''P'' THEN ''PROCEDURE''
  WHEN ''FN'' THEN ''FUNCTION''
  WHEN ''IF'' THEN ''FUNCTION''
  WHEN ''TF'' THEN ''FUNCTION'' END
 FROM ' + QUOTENAME(@LinkServer) +'.tempdb.sys.objects rp
 JOIN sys.objects  lp ON rp.name = lp.name
 WHERE rp.name = ' + QUOTENAME(@name,'''') +'
 
 SELECT CASE(@IsDrop) WHEN 1 THEN ''drop'' ELSE ''nodrop'' END,
 CASE(@IsCreate) WHEN 1 THEN ''create'' ELSE ''nochanged'' END,@type

 IF @IsDrop = 1
 BEGIN
  
  --SET @tmp = ''exec tempdb.dbo.sp_executesql @statement =N'''' DROP ''  + @type +   ' + '  ' + QUOTENAME(@name,'''') +'
  SET @tmp = ''exec tempdb.dbo.sp_executesql @statement =N''  + ''''''  DROP ''  + @type +   ' + '''  '   + @name +'''  + ''''''''
  PRINT @TMP
  EXEC(@tmp) AT  ' + QUOTENAME(@LinkServer) +'
 END
 IF @IsCreate = 1
 BEGIN
  DECLARE @CreateSql VARCHAR(max), @ExecuteSQL VARCHAR(MAX)
  SELECT  @CreateSql =  REPLACE(definition,'''''''','''''''''''')--escape embedded quotes
  from sys.sql_modules where object_id=object_id(' + QUOTENAME(@name,'''') +' )
  IF @CreateSql IS NOT NULL
  BEGIN
   SET @ExecuteSQL = ''EXEC tempdb.dbo.sp_executesql @statement =N'''''' + @CreateSql +''''''''
   EXEC(@ExecuteSQL)  AT ' + QUOTENAME(@LinkServer) +'
  END
 END '

 --PRINT @sql
 EXEC (@SQL)
END



1. sys.objects holds user-level objects: user tables, constraints, primary keys, functions, stored procedures, and views, plus the rarely used system base tables left over from the 2000 era.
2. sys.system_objects holds system-level objects, including system stored procedures such as sp_helptext and the sys.dm_* dynamic views.
3. sys.objects + sys.system_objects = sys.all_objects



Summary of page structure:
1. How to view it:
Find the structure of all the data, including the file number and page number:
DBCC IND( Business , Fixed , -1 )
Using the file number and page number obtained above, run the following to get the data structure:
DBCC TRACEON( 3604 )
DBCC PAGE( Business , 1 , 3394 , 1 )

2. Details:
A page is 8 KB (8192 bytes) and consists of the page header, the data rows, and the row offset array.
DBCC PAGE output is divided into four main sections:
      1  BUFFER: buffer information for the given page. This refers to an in-memory structure, and is meaningful only while the page is currently in memory.
      2  PAGE HEADER: all the header fields of the page, occupying its first 96 bytes.
      3  DATA: the information of each record.
      4  OFFSET TABLE: the contents of the row offset array, stored at the end of the page.
DATA record layout:
Use the SQL above to view the offsets; the fixed-length column contents follow the record header.
Status Bits A                       1 byte
Status Bits B                       1 byte
Fixed-length data size              2 bytes
Fixed-length data                   (the size given above)
Number of columns                   2 bytes
NULL bitmap                         1 bit per column
Number of variable-length columns   2 bytes
Variable-length column offsets      2 bytes × number of variable-length columns
Variable-length data
Note: in the dump, each byte is displayed as two hexadecimal digits.
Converting the hexadecimal display back to actual values: if it represents a number, such as the fixed-length data size, or the column itself is an integer, convert from back to front (numeric fields are stored low byte first); if it is
character data, convert each pair of hex digits to the corresponding ASCII character.

3. What observing the structure tells us:
1) Make a column's length smaller and compare the data before and after: it is identical. The original data is never deleted; it is simply not shown when queried, and a tighter range limit is applied on insert.
This explains why shrinking a table column barely changes the data size: no delete is performed, which keeps the operation efficient. It also explains why, after rows are deleted, the database files do not shrink correspondingly; to genuinely reclaim the space, rebuild the clustered index.

2) Make a column's length larger with ALTER and compare: an extra column appears, and the original column's data is retained as well. This explains why, when you change a structure in the table designer, such as making a column longer or shorter, and then let
SQL Server generate the SQL, it creates a new table and rebuilds it rather than emitting a direct ALTER: SQL Server knows that with a direct ALTER the original data would not be removed.
The same happens when columns are reordered; the script SQL Server generates also rebuilds the table, because a newly added column always receives the next value above the current highest column_id, and column
order follows column_id.
In short: when altering a table, to avoid touching every row of data (each row would have to be checked and logged, which is slow and resource-hungry), the original row data is generally kept:
A smaller column: the original data is not deleted but kept; only the entry and the exit are narrowed.
A larger column: a new column is created directly, rather than widening the existing one.
Changing column order: the order is fixed once established, because it follows column_id, so reordering requires rebuilding the table.

 


1. Current Disk Queue Length: should stay at or below 2 × the number of physical disks; anything higher means requests are queuing and the disks are saturated.

2. Deployment checklist:
     1) Memory: use 64-bit.
     2) Disks: put the data on a disk array.
     3) Put the log file on a different physical disk from the data files.
     4) RAID 10 or RAID 5.
     5) Partition large tables and spread the partitions across multiple disks.
3. An index sorts the data. To store that sorted structure SQL Server uses a B-tree: each branch node stores the minimum key of its child, and because the data is ordered, lookups can navigate with binary-search-style comparisons. A lookup through a nonclustered index, for example, does not jump straight to the clustered-index row: it walks the B-tree down to the indicated leaf node, then fetches the data using the key it finds there.
4. Every clustered index in SQL Server is internally unique. If you do not specify UNIQUE when creating it, SQL Server guarantees uniqueness by appending a hidden 4-byte uniquifier to rows whose key values are duplicated.
5. The leaf level of a nonclustered index holds the index key values plus a bookmark pointing back to the table data. The bookmark takes one of two forms, depending on whether the table has a clustered index: 1) the clustered index key value, or 2) a RID: file number : page number : slot number.
Two consequences follow: 1) if the table has a clustered index, changing a clustered key value forces the corresponding nonclustered entries to change too. Nonclustered indexes depend on the clustered key, so choose it carefully, ideally a column that is rarely updated.
                       2) A lookup through a nonclustered index that still has to follow the bookmark to fetch data carries extra cost, which dictates where nonclustered indexes shine: retrieving a small number of rows from a large table, where the bookmark lookups stay cheap. They are especially effective when the index contains every column the SELECT needs (a covering index); performance can then even beat the clustered index, because a nonclustered index references few columns and occupies far fewer pages than the table, so reading it is fast. Over large ranges, however, the bookmark lookups become the bottleneck and a plain clustered index scan is faster, which is why execution plans sometimes ignore an available index in favor of a clustered index scan.
6. How indexes get used:
     1) Covering index: the nonclustered index contains every column the SELECT needs.
     2) Index intersection: several indexes are used to apply the WHERE filters, then the results are merged.
     3) Index join: one index does the filtering while the required columns come from another index, exploiting how small indexes are and avoiding bookmark lookups into the table.
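A covering index for case 1) can be created with INCLUDE (the column names here are hypothetical):

```
-- every column the SELECT needs lives in the index leaf, so no bookmark lookup
CREATE NONCLUSTERED INDEX IX_bdorder_Cover
    ON bdorder (OrderDate) INCLUDE (CustomerID, Amount);
-- now  SELECT CustomerID, Amount FROM bdorder WHERE OrderDate = ...  is covered
```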
7. Listing a table's indexes: sp_helpindex frmuser

8. Index usage statistics:

Select t.object_id, ix.name, user_updates, user_seeks, user_scans, t.name, sc.name
    from sys.dm_db_index_usage_stats ius
    JOIN sys.tables t ON ius.object_id = t.object_id
    JOIN sys.indexes ix ON t.object_id = ix.object_id AND ix.index_id = ius.index_id
    JOIN sys.index_columns ixc ON t.object_id = ixc.object_id AND ix.index_id = ixc.index_id
    JOIN sys.columns sc ON t.object_id = sc.object_id AND ixc.column_id = sc.column_id
where t.name = 'bdorder'    -- an index with user_updates > 10 * (user_seeks + user_scans) is a candidate to drop

 

 



The optimizer builds its estimates from statistics, e.g. the estimated row count of each plan step.
View statistics: DBCC SHOW_STATISTICS ('bdship','IX_bdship_2')
After building an index, judge its quality with the same command: check the duplication rate; the lower, the better.
When statistics are maintained:
    1. When a table goes from empty to non-empty, e.g. after an index rebuild.
    2. Small tables (under 500 rows): when the leading statistics column has accumulated more than 500 changes.
    3. Large tables (500 rows and up): when the leading statistics column has accumulated more than 500 + 20% of the table's row count in changes, i.e. once roughly a fifth of the table has changed, statistics are refreshed automatically.
    4. Manual updates, as follows:
Updating statistics by hand:
UPDATE STATISTICS mdDept              -- the whole table
UPDATE STATISTICS mdDept IX_mdDept    -- one index on the table
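If the sampled statistics still give poor estimates, a full scan can be requested instead (same table as above):

```
UPDATE STATISTICS mdDept WITH FULLSCAN   -- read every row instead of sampling
```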

1. An index only pays off on columns with few duplicates; if every value in the column were identical, the index would be pointless.
After building an index, judge it with DBCC SHOW_STATISTICS ('bdship','IX_bdship_2'):
In the first result set, look at Density (1 / number of distinct values):
A Density near 0 (say below 0.1) means very little duplication: the column is highly selective and the index is well chosen.
A Density near 1 means heavy duplication: the index adds little, and the optimizer will sometimes use a scan instead of an index seek.

The third result set (the histogram):
RANGE_HI_KEY:
the upper bound of each bucket.
RANGE_ROWS: the number of rows inside the bucket, excluding the upper bound, i.e. in (previous RANGE_HI_KEY, current RANGE_HI_KEY).
EQ_ROWS: the number of rows equal to RANGE_HI_KEY.
DISTINCT_RANGE_ROWS: the number of distinct values inside the bucket.
AVG_RANGE_ROWS: the average number of rows per distinct value inside the bucket, excluding the upper bound
= RANGE_ROWS / DISTINCT_RANGE_ROWS.
With these, a given predicate can be turned into an estimate of how many rows, and how many MB, it will return.
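To look at just one of these result sets, DBCC SHOW_STATISTICS accepts a WITH option:

```
DBCC SHOW_STATISTICS ('bdship', 'IX_bdship_2') WITH HISTOGRAM   -- only the bucket table
-- WITH STAT_HEADER or WITH DENSITY_VECTOR return the other two result sets
```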



Compiling an execution plan takes time, so plans are cached for reuse. Inspect the cache like this:
-- view cached execution plans

SELECT UseCounts,RefCounts, Cacheobjtype, Objtype,dbid,
ISNULL(DB_NAME(dbid),'ResourceDB') AS DatabaseName, TEXT AS SQL
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
ORDER BY dbid,usecounts DESC;

 SELECT cacheobjtype,objtype,sql FROM sys.syscacheobjects a
 WHERE cacheobjtype = 'compiled plan' AND dbid = 7


 
-- clear the entire plan cache
DBCC FREEPROCCACHE
GO
-- remove the cached plans of a given database
DBCC FLUSHPROCINDB(<dbid>)
GO

When a plan is recompiled and put back in the cache:
1. The schema of a referenced table or view changed, or an index change altered the statistics.
2. sp_recompile was run.
3. The plan was evicted, e.g. the server or the SQL Server service restarted, clearing everything held in memory.
4. The cache evicts rarely used plans with an LRU algorithm. sys.dm_exec_query_stats has a creation_time column marking when a plan was created; plans are typically rebuilt every hour or two.
Cache activity can be observed with the Profiler events SP:CacheInsert, SP:CacheMiss and SP:CacheHit.

Plans should be reused to amortize compilation cost, so it is worth finding frequently recompiled statements and tuning them. Reuse is not always good, though: a cached plan can be wrong for the current call, as with the parameterization problems discussed later.

sys.dm_exec_query_stats has a plan_generation_num column, the number of times the plan has been compiled; use it to find the most frequently recompiled plans.
select top 25
    plan_generation_num,
    SUBSTRING(qt.text,qs.statement_start_offset/2+1,
        (case when qs.statement_end_offset = -1
        then DATALENGTH(qt.text)
        else qs.statement_end_offset end -qs.statement_start_offset)/2 + 1)
        as stmt_executing,
    qt.text,
    execution_count,
    sql_handle,
    dbid,
    db_name(dbid) DBName,
    objectid,
    object_name(objectid,dbid) ObjectName
from sys.dm_exec_query_stats as qs
    Cross apply sys.dm_exec_sql_text(sql_handle) qt
where plan_generation_num >1
--AND qs.last_execution_time >= '2011-08-28 10:00'  -- restrict by time
order by plan_generation_num desc



1. Access methods:
                       Scan                      Seek
Heap                   Table Scan                (none)
Clustered index        Clustered Index Scan      Clustered Index Seek
Nonclustered index     Index Scan                Index Seek
Why a nonclustered index scan appears: the index contains only a small subset of the table's columns, so it occupies far fewer pages than the table itself, making an index scan much cheaper than a clustered index scan.
The precondition is that the index contains every requested column. Why wasn't an index seek used instead? Perhaps an expression was applied to the column, or the column is not the leading key of a composite index.
2. Is a scan always slower than a seek? Not necessarily.
Scenario: an order-detail table with a clustered index on DetailID and a nonclustered index on OrderID; the query returns the product and operator columns.
Filtering on OrderID, the natural plan first seeks the OrderID nonclustered index for matching rows, then, because product and operator are needed, performs a lookup into the clustered index for each match. Note that lookup. Now suppose one OrderID matches a very large number of detail rows: the lookup phase costs far more than the seek saved, and a plain clustered index scan would be faster.
So in practice you will see plans replace a nonclustered index seek with a clustered index scan, because in such cases the scan really is faster; a nonclustered index existing does not mean it will be used.
When this happens, check whether the nonclustered key is heavily duplicated and whether that duplication can be eliminated.
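To compare the two plans directly, the nonclustered index can be forced with a table hint (hypothetical table, index and column names matching the scenario above):

```
-- forced seek + bookmark lookups; compare the cost against the unhinted clustered scan
SELECT Product, Operator
FROM OrderDetail WITH (INDEX(IX_OrderDetail_OrderID))
WHERE OrderID = 12345
```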



1. Ways to view an execution plan:
1) Return the plan as a rowset (a tree, read top-down and left-to-right; use TotalSubtreeCost for the cost) without executing the statement:
set showplan_all on
go
select * from AWBuildVersion
set showplan_all off

2) Return the plan as XML, without executing the statement:
set showplan_XML on
select * from AWBuildVersion
set showplan_XML off

3) Return the plan as a rowset while actually executing the statement; because the statement ran, the output includes the actual row count of every step (the Rows column), which can be
compared with the estimated rows to judge how well the plan fits.
set statistics profile on
select * from AWBuildVersion
set statistics profile off
4) To judge a plan, first compare EstimateRows with Rows. A large gap (thousands of rows off) means the estimate is wrong,
usually because statistics are stale; rebuild the index to refresh them. (Rebuilding an index helps twice: 1. statistics are refreshed, so plan estimates improve; 2. fragmentation is removed, so lookups get faster.)
Second, look at the TotalSubtreeCost.

Columns of the plan rowset:
Rows: the actual number of rows each step returned.
Executes: how many times each step ran; a very large value on a nested loop suggests recompiling so a hash join can be used.
StmtText: the text of the plan step.
EstimateRows: the estimated rows per step; compare with Rows to validate the estimate.
EstimateIO: the estimated I/O cost per step.
EstimateCPU: the estimated CPU cost per step.
TotalSubtreeCost: a cost computed from EstimateIO and EstimateCPU for the whole subtree, i.e. the step's own cost plus the total cost of every step beneath it.
Warnings: warnings SQL Server hit while running the step, e.g. estimating without statistics.
Parallel: whether a parallel plan was used.


2. Join types
1) Nested Loops join: for every qualifying row of table A, scan table B. In the plan it appears as a Nested Loops operator over two inputs,
                     each filtered first, then joined.
Force with: INNER LOOP/MERGE/HASH JOIN
Pros: SQL Server needs no extra data structure for the join and uses no tempdb space.
Cons: complexity is the product of the two row counts, so it suits small inputs on both sides, ideally with an index on B.
2) Merge join: take one row from A and one from B and compare. If equal, join and return them; if not, discard the smaller value and
advance to the next, larger value, until either input is exhausted.
From that definition: 1. Merge join requires both inputs sorted; unsorted inputs get an explicit sort first, and as a side effect the join output is sorted.
              2. Only equality predicates work; the ON condition must use =.
              3. Time complexity is linear in the qualifying rows of each table, so it suits large data sets.
              4. To avoid losing rows when values repeat, the already-scanned part of B is spilled to tempdb for re-reading; this is the many-to-many case.
          So merge join is cheap, but unindexed inputs must be sorted, and many-to-many needs tempdb. If the join column is declared UNIQUE, SQL Server knows
no duplicates exist and skips the many-to-many machinery.
3) Hash join: a hash-based join in two phases: first build a hash table over B in memory or tempdb, then probe it with rows from the other input. It suits
large data sets, especially unsorted, unindexed ones. Building the hash table costs memory, and hash matching costs CPU.

Comparison:
                     Nested Loops Join   Merge Join             Hash Join
Best data size       small               medium                 large
Requires = in ON     no                  yes                    yes
Uses memory          no                  no                     yes, for the hash table
Uses tempdb          no                  many-to-many only      yes, for the hash table
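The physical join can be forced per query to compare the three side by side (tables A and B here are hypothetical):

```
SELECT *
FROM A INNER HASH JOIN B ON A.id = B.id    -- also: INNER LOOP JOIN / INNER MERGE JOIN
-- or leave the join syntax alone and hint the whole query:
-- SELECT * FROM A JOIN B ON A.id = B.id OPTION (HASH JOIN)
```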

3. Other operators
1) Stream Aggregate: the input is arranged into a sorted sequence and then aggregated; appears with aggregate functions and GROUP BY.
2) Hash Match (aggregate): like a hash join, SQL Server must build a hash table before it can aggregate; appears when aggregating large inputs.
3) Concatenation: both UNION ALL and UNION produce a Concatenation operator that appends the data sets; UNION additionally sorts afterwards (so the rows can be grouped) and deduplicates, which costs noticeably more.
4) Parallelism: when SQL Server sees that a data set is large and the cost is high, and the server has multiple CPUs, it will try a
parallel plan, splitting the set into pieces handled by several threads at once.

----------------------------------------parameterization problems---------------------------------------------------
The parameter problem
When a stored procedure is called, its execution plan is reused, but with different data a cached plan may no longer match reality and run slowly, to
the point where ad-hoc SQL is faster.
1. A procedure's parameters come in two kinds: 1) passed-in parameters, 2) internal variables.
1) Passed-in parameters: if the plan inside the procedure differs from the plan you get by running the same statement directly, you are probably looking at a parameterization problem. For example, the first execution produced a nested loops plan,
which was cached; the next call covers a much larger data range and should use a hash join, but the cached nested loops plan is reused. So the problem only
bites when a table's data is distributed very unevenly, or the parameters users pass are very uneven: most customers query one day of data, but one queries a whole
year, and typically a nested loops plan runs where a hash join belongs.
Fixes: 1) Execute dynamic SQL.
2) Force recompilation:
(1) Procedure level: WITH RECOMPILE on CREATE PROCEDURE recompiles the whole procedure on every call, which is slower.
(2) Statement level: OPTION (RECOMPILE) after a statement is finer-grained; only that statement is recompiled, the rest of the procedure is not, so it is cheaper,
and if an IF/ELSE branch skips the statement, no recompile happens at all.
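A minimal sketch of the statement-level fix, reusing the bdorder table from earlier (the procedure, parameter and column names are made up):

```
CREATE PROCEDURE GetOrders @from DATETIME AS
BEGIN
    -- only this statement is recompiled on each call; the rest of the proc keeps its cached plan
    SELECT * FROM bdorder WHERE OrderDate >= @from
    OPTION (RECOMPILE)
END
```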

2) Internal variables only get their values at execution time; at compile time their sizes are unknown, so the optimizer estimates a middle value from the table's statistics, giving a plan that is neither the best nor the worst. For example, consider a column X, containing 10 rows
with values 1, 2, 3, 3, 3, 3, 3, 4, 5, 5. The optimizer would always estimate that a "WHERE
X=value" query will return 2 rows (total number of rows divided by number of distinct
values).
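The arithmetic from that example can be checked directly (VALUES row constructor, SQL Server 2008+):

```
-- 10 rows / 5 distinct values = 2 estimated rows for WHERE X = value
SELECT COUNT(*) * 1.0 / COUNT(DISTINCT x) AS estimated_rows
FROM (VALUES (1),(2),(3),(3),(3),(3),(3),(4),(5),(5)) AS t(x)   -- returns 2.0
```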

---------------------------------------summary-----------------------------------------------------------------
Viewing the plan:
SET statistics profile on
1) Compare Rows with EstimateRows.
A large gap means the statistics are stale; rebuild the index to refresh them. (Rebuilding an index helps twice: 1. statistics are refreshed, so plan estimates improve; 2. fragmentation is removed, so lookups get faster.)
2) A very large Executes on a nested loop means the plan no longer fits; this mostly happens inside stored procedures, so force a recompile with OPTION (RECOMPILE).
3) Check TotalSubtreeCost and build indexes where warranted.
4) A scan where you expected a seek usually means the bookmark lookups into the clustering key were too expensive, so a direct scan was faster; see whether trimming the SELECT column list can avoid the lookup.
5) Clear the cache: DBCC FREEPROCCACHE


/*
Purpose: replace the literals in a SQL string with # to obtain its normalized ("signature") form
Example: SELECT dbo.SQLSig('select * from t1 where t1.name=2',1000)
*/

IF OBJECT_ID('dbo.SQLSig', 'FN') IS NOT NULL
  DROP FUNCTION dbo.SQLSig;
GO

CREATE FUNCTION dbo.SQLSig 
  (@p1 NTEXT, @parselength INT = 4000)
RETURNS NVARCHAR(4000)

-- Strips query strings
AS
BEGIN 
  DECLARE @pos AS INT;
  DECLARE @mode AS CHAR(10);
  DECLARE @maxlength AS INT;
  DECLARE @p2 AS NCHAR(4000);
  DECLARE @currchar AS CHAR(1), @nextchar AS CHAR(1);
  DECLARE @p2len AS INT;

  SET @maxlength = LEN(RTRIM(SUBSTRING(@p1,1,4000)));
  SET @maxlength = CASE WHEN @maxlength > @parselength 
                     THEN @parselength ELSE @maxlength END;
  SET @pos = 1;
  SET @p2 = '';
  SET @p2len = 0;
  SET @currchar = '';
  set @nextchar = '';
  SET @mode = 'command';

  WHILE (@pos <= @maxlength)
  BEGIN
    SET @currchar = SUBSTRING(@p1,@pos,1);
    SET @nextchar = SUBSTRING(@p1,@pos+1,1);
    IF @mode = 'command'
    BEGIN
      SET @p2 = LEFT(@p2,@p2len) + @currchar;
      SET @p2len = @p2len + 1 ;
      IF @currchar IN (',','(',' ','=','<','>','!')
        AND @nextchar BETWEEN '0' AND '9'
      BEGIN
        SET @mode = 'number';
        SET @p2 = LEFT(@p2,@p2len) + '#';
        SET @p2len = @p2len + 1;
      END 
      IF @currchar = ''''
      BEGIN
        SET @mode = 'literal';
        SET @p2 = LEFT(@p2,@p2len) + '#''';
        SET @p2len = @p2len + 2;
      END
    END
    ELSE IF @mode = 'number' AND @nextchar IN (',',')',' ','=','<','>','!')
      SET @mode= 'command';
    ELSE IF @mode = 'literal' AND @currchar = ''''
      SET @mode= 'command';

    SET @pos = @pos + 1;
  END
  RETURN @p2;
END
GO