Gerard's codebase

This is a personal repository of code snippets and information.

Over the years I have generated lots of little sub-programs and stored away acres of useful code snippets. The problem is always locating them.

Even more time-wasting is forgetting how to do simple things when you haven't developed in an environment for a few years (or even a few months).

My New Year's resolution is to start putting them up in a common place as I produce them. (thanks google)

They are handy for me and, with a bit of a clean up and documentation, they might be handy for others if they wander in here.

Gerard 2008

Tuesday, November 18, 2008

SQL - Stripping out the time from a DateTime - going back to weekdays

-- locate last sunday
DECLARE @Date As DATETIME
DECLARE @CurDay AS INTEGER

SET @CurDay = DATEPART(weekday,GETDATE()) --1 = Sunday under the default DATEFIRST setting (7)
SET @Date =
CASE
WHEN 1 = @CurDay THEN GETDATE()
WHEN 2 = @CurDay THEN DATEADD(DAY,-1,GETDATE())
WHEN 3 = @CurDay THEN DATEADD(DAY,-2,GETDATE())
WHEN 4 = @CurDay THEN DATEADD(DAY,-3,GETDATE())
WHEN 5 = @CurDay THEN DATEADD(DAY,-4,GETDATE())
WHEN 6 = @CurDay THEN DATEADD(DAY,-5,GETDATE())
WHEN 7 = @CurDay THEN DATEADD(DAY,-6,GETDATE())
ELSE GETDATE()
END
--Strip out the time part (note: use @Date here, not GETDATE(), or the weekday adjustment above is lost)
SET @Date = CAST((STR( YEAR( @Date ) ) + '/' + STR( MONTH( @Date ) ) + '/' + STR( DAY( @Date ) ) ) AS DATETIME )
Print @Date
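The same logic, sketched in Python for clarity (a translation of the T-SQL above, assuming the server runs with the US-default DATEFIRST of 7, i.e. weekday 1 = Sunday):

```python
from datetime import datetime, timedelta

def last_sunday(now: datetime) -> datetime:
    # Python weekdays run Monday=0 .. Sunday=6, so the distance back
    # to the most recent Sunday is (weekday + 1) mod 7
    days_back = (now.weekday() + 1) % 7
    sunday = now - timedelta(days=days_back)
    # "strip out the time part" = truncate to midnight
    return sunday.replace(hour=0, minute=0, second=0, microsecond=0)
```

For example, last_sunday(datetime(2008, 11, 18, 15, 30)) gives midnight on Sunday, 16 November 2008.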

Avoid cursors in SQL Server with these methods to loop over records

This article copied from http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1339242,00.html?track=NL-414&ad=672066&asrc=EM_NLT_5051592&uid=6482800#

Matthew Schroeder
11.17.2008

Many articles have beaten up on SQL Server cursors -- database objects that manipulate data in a set on a row-by-row basis -- and I want to add my name to the list of people who wish cursors had never been introduced. But, unfortunately, cursors are a fact of life. Problems with cursors include extending locks, their inability to cache execution plans and CPU/RAM overhead. Many T-SQL programmers and DBAs do not know how to successfully loop over records without the need for cursors. In this tip, I'll share some alternatives to cursors that provide looping functionality.

Method 1: Temp table with identity column

In the first approach, we will use a temp table with an identity column added to allow for row-by-row selection. If you're performing an INSERT/UPDATE/DELETE, be sure to use explicit transactions. Committing per loop vastly reduces the load on your log file and prevents huge rollbacks in the case of failure.

set nocount on
declare @i int
--iterator
declare @iRwCnt int
--rowcount
declare @sValue varchar(100)

set @i = 1
--initialize

create table #tbl(ID int identity(1,1), Value varchar(100))

insert into #tbl(Value)
select name
from master..sysdatabases (nolock)

set @iRwCnt = @@ROWCOUNT
--SCOPE_IDENTITY() would also work

create clustered index idx_tmp on #tbl(ID) WITH FILLFACTOR = 100
/*

Always do this after the insert, since it's faster to add the index in bulk than to update the index as you write into the temp table. Since you know the data in this column, you can set the fill factor to 100% to get the best read times.

*/
while @i <= @iRwCnt
begin
select @sValue = Value from #tbl where ID = @i
--begin tran
print 'My Value is ' + @sValue
--replace with your operations on this value
--commit tran

set @i = @i + 1
end
drop table #tbl
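The pattern in Method 1 translates directly outside T-SQL; here it is sketched in Python (the database names are just stand-ins for whatever the temp table would hold):

```python
# Snapshot the rows once, pair each with a 1-based ID (the identity
# column), then walk by counter -- no cursor held open on the source.
rows = ["master", "tempdb", "model", "msdb"]      # stand-in data

indexed = list(enumerate(rows, start=1))          # (ID, Value) pairs
row_count = len(indexed)                          # like @@ROWCOUNT

i = 1
output = []
while i <= row_count:
    value = indexed[i - 1][1]                     # SELECT Value WHERE ID = @i
    output.append("My Value is " + value)         # replace with real work
    i += 1
```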

Method 2: Temp table without ID

In the second approach, we use a temp table without an identity column and simply grab the top row to process, then loop until we find no more rows to process. If you're performing an INSERT/UPDATE/DELETE, again, be sure to use explicit transactions: committing per loop vastly reduces the load on your log file and prevents huge rollbacks in the case of failure.

set nocount on

declare @i int
--iterator
declare @iRwCnt int
--rowcount
declare @sValue varchar(100)
set @i = 1
--initialize

create table #tbl(Value varchar(100))

insert into #tbl(Value)
select name
from master..sysdatabases (nolock)

set @iRwCnt = @@ROWCOUNT
--SCOPE_IDENTITY() would also work

create clustered index idx_tmp on #tbl(Value) WITH FILLFACTOR = 100
/*

Always do this after the insert, since it's faster to add the index in bulk than to update the index as you write into the temp table. Since you know the data in this column, you can set the fill factor to 100% to get the best read times.

*/

while @iRwCnt > 0
begin
select top 1 @sValue = Value from #tbl
set @iRwCnt = @@ROWCOUNT
--ensure that we still have data

if @iRwCnt > 0
begin

--begin tran
print 'My Value is ' + @sValue --replace with your operations on this value
--commit tran

delete from #tbl where value = @sValue
--remove processed record
end
end

drop table #tbl
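Method 2's loop is the classic "take one, process it, delete it" pattern; a Python sketch of the same control flow (stand-in data again):

```python
# Keep taking an arbitrary remaining row (SELECT TOP 1), process it,
# then delete it, until the working set is empty.
remaining = {"master", "tempdb", "model", "msdb"}  # stand-in for #tbl

processed = []
while remaining:
    value = next(iter(remaining))   # SELECT TOP 1 Value FROM #tbl
    processed.append(value)         # replace with real work
    remaining.discard(value)        # DELETE FROM #tbl WHERE value = @sValue
```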

Method 3: Selecting a comma-delimited list of items

When most developers/DBAs are asked to come up with a list of comma-delimited values from a table, they typically use a cursor or temp table (as above) to loop through the records. However, if you do not need to use a GROUP BY or an ORDER BY, then you can use the method below, which operates in batch, to handle the task. This cannot be used with GROUP BY, DISTINCT, or ORDER BY, because of how SQL Server handles those operations.

Basically, this takes a given variable, and for every row in the table it adds the current value to the variable along with a comma.

declare @vrs varchar(4000)
declare @sTbl sysname
set @sTbl = 'TableName'
set @vrs = ''
select @vrs = @vrs + ', ' + name
from syscolumns
where id = (select st.id from sysobjects as st where name = @sTbl)
order by colorder
set @vrs = right(@vrs, len(@vrs)-2)
print @vrs
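The accumulation trick is easy to see in miniature. A Python sketch of the same shape (the column names are made up):

```python
# Fold every row into one variable with a leading ", ", then slice the
# first separator off the front.
columns = ["id", "name", "colorder"]   # stand-in for syscolumns rows

vrs = ""
for name in columns:                   # SELECT @vrs = @vrs + ', ' + name
    vrs = vrs + ", " + name
vrs = vrs[2:]                          # RIGHT(@vrs, LEN(@vrs) - 2)
```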

This article gives you some good reasons why cursors in SQL Server should be avoided as well as some alternatives that give you looping functionality. Keep in mind that SQL Server is designed around batch processing, so the less you loop, the faster your system will run.

Tuesday, November 11, 2008

Default trace - A Beginner's Guide

This article can be found at http://www.sqlservercentral.com/articles/SQL+Server+2005/64547/ - just put here as a reference

Default trace - A Beginner's Guide

By Adam Haines, 2008/11/11

We have all been subject to or know someone who has been in a situation where an object has been altered/created/deleted, without our knowledge, and the application comes to a screeching halt. After fixing the problem, your boss asks you some questions, like what happened, why did it happen, and who did it. SQL Server 2005 introduced a new type of trigger called a DDL trigger that can provide all the answers we need; however, you did not get a chance to implement this functionality. So... what do you do?

Some would tell their boss "I do not know, but I can find out" and then search frantically for third-party tools to read the transaction log, hoping for instantaneous salvation. What these few do not know is that an answer is silently running in the background. SQL Server 2005 has built-in functionality that gives administrators the answers to all these questions.

The answers lie in a new background trace called the default trace. The default trace is exactly what the name specifies, a trace. Default trace is always running in the background of your instance capturing events that administrators can use to troubleshoot problems. The default trace is enabled by default and does not burden the system because it is fairly lightweight. Chances are you had not even noticed this trace running on your instance. To those concerned about overhead, yes there is overhead, but in my mind the benefits far outweigh the minimal overhead. The default trace is not intended to replace DDL trigger functionality and should be used as a means to monitor an SQL Instance, or quickly obtain detailed information about problematic events.

The default trace does not capture all trace events, but captures enough information to become a powerful tool in your toolkit. The default trace captures key information including auditing events, database events, error events, full text events, object creation, object deletion and object alteration. From my experiences and observations on forums, I will be focusing on object level events. It seems that a greater number of people want the "who done it" answer for object DDL events.

The first piece of code checks whether the default trace is enabled.

SELECT * FROM sys.configurations WHERE configuration_id = 1568

If this feature is not available, you will have to configure the advanced option "default trace enabled". Below is the code to enable the trace. Note: you will need the ALTER SETTINGS permission or be in the sysadmin or serveradmin fixed server role to reconfigure.

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'default trace enabled', 1;
GO
RECONFIGURE;
GO

The next piece of information we need is the default trace file path, and the function below will return the current trace file. You can grab the initial trace file (log.trc) and roll up every trace file into a single table, but there is a higher overhead associated with bringing more data in. You should use the trace file that best represents the information you are looking for.

Note: the path is defaulted to the \MSSQL\LOG directory, but we can use the function below to get the path

--get the current trace rollover file
SELECT * FROM ::fn_trace_getinfo(0)

Now that we have all the information we need, we can get into the trace data. Let's start by creating a new database called TraceDB.

USE [master]
GO
CREATE DATABASE TraceDB

Now open the trace file, as shown below. As you can see, we were able to gather some pretty significant information about who created the database and when the database was created. I have used category id of 5 and a trace_event_id of 46 to filter the data correctly. Event ID 46 represents Object:Created and category 5 is objects. I will provide queries that list all events and categories at the end of this article.

** Make sure to use your trace file path below. Yours may be different than mine.

SELECT
loginname,
loginsid,
spid,
hostname,
applicationname,
servername,
databasename,
objectName,
e.category_id,
cat.name as [CategoryName],
textdata,
starttime,
eventclass,
eventsubclass,--0=begin,1=commit
e.name as EventName
FROM ::fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log.trc',0)
INNER JOIN sys.trace_events e
ON eventclass = trace_event_id
INNER JOIN sys.trace_categories AS cat
ON e.category_id = cat.category_id
WHERE databasename = 'TraceDB' AND
objectname IS NULL AND --filter by objectname
e.category_id = 5 AND --category 5 is objects
e.trace_event_id = 46
--trace_event_id: 46=Create Obj,47=Drop Obj,164=Alter Obj
  • You will see more than one entry per object create because these objects have two event sub-classes - begin and commit. Each subclass will have an entry.
  • You can remove the databasename filter to get object creation events for all databases.

Results (Trimmed for Simplicity):


Now, we have seen what default trace is capable of. Let's create another object and repeat the query. This time around we are going to create a table called "MyTable". Use the following code to create the table.

USE [TraceDB]
GO
CREATE TABLE [dbo].[MyTable](
[id] [int] IDENTITY(1,1) NOT NULL,
[sometext] [char](3) NULL
) ON [PRIMARY]

Now query the default trace using the same query as above. Note you can use the ObjectName column to filter for the specific object you are looking for; otherwise all created database objects are returned.

WHERE databasename = 'TraceDB' AND
objectname = 'MyTable' AND --filter by objectname
e.category_id = 5 AND --category 5 is objects
e.trace_event_id = 46
--trace_event_id: 46=Create Obj,47=Drop Obj,164=Alter Obj

Results (Trimmed for Simplicity):


Let's take the demo a step further by altering MyTable. Issue an alter table statement and add a new column to MyTable, as shown below.

USE [TraceDB]
GO
ALTER TABLE MyTable
ADD col INT

We can now search trace information on the alter event for MyTable. We can use the same query as before but need to make a small modification. You must change the trace_event_id to 164 because event 164 represents the object:Altered event.

WHERE databasename = 'TraceDB' AND
objectname = 'MyTable' AND --filter by objectname
e.category_id = 5 AND --category 5 is objects
e.trace_event_id = 164
--trace_event_id: 46=Create Obj,47=Drop Obj,164=Alter Obj

Results (Trimmed for Simplicity):


Now let's drop MyTable and view the trace details. You must change the trace_event_id to 47 because event 47 represents the object:Deleted event, as shown below.

USE [TraceDB]
GO

DROP TABLE MyTable

We can view trace data by changing the trace_event_id to 47.

WHERE databasename = 'TraceDB' AND
objectname = 'MyTable' AND --filter by objectname
e.category_id = 5 AND --category 5 is objects
e.trace_event_id = 47
--trace_event_id: 46=Create Obj,47=Drop Obj,164=Alter Obj

Results (Trimmed for Simplicity):


As you can see, default trace gives an administrator the ability to find the history of any DDL transaction. I want to point out that default trace is not limited to object DDL history. Among other things, default trace captures log growth events, which can be invaluable to troubleshooting disk capacity problems.

For example, say your log file spontaneously grows enormous. It is important to understand why the log grew, and one of the first places to look is SQL jobs. There are many commands within a job that can potentially cause the log to grow, like reindexing, bulk inserts, bulk deletes etc. By using the trace data you can more easily identify the problem, because you can pinpoint the exact time the log file began to grow. This greatly reduces the number of possible suspects, which reduces the amount of time required to find the culprit.

The query below will pull all trace data using the log auto-growth event. Note: you will not have any log growth for TraceDB because we have not done any large inserts to make the log grow. You should apply this query to another database where you want to monitor log growth.


SELECT
loginname,
loginsid,
spid,
hostname,
applicationname,
servername,
databasename,
objectName,
e.category_id,
cat.name,
textdata,
starttime,
endtime,
duration,
eventclass,
eventsubclass,
e.name as EventName
FROM ::fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log.trc',0)
INNER JOIN sys.trace_events e
ON eventclass = trace_event_id
INNER JOIN sys.trace_categories AS cat
ON e.category_id = cat.category_id
WHERE databasename = 'TraceDB' AND
e.category_id = 2 AND --category 2 is database
e.trace_event_id = 93 --93=Log File Auto Grow

Summary:

The default trace is a valuable tool for the modern DBA's tool belt. It offers a wealth of information while minimally impacting the system. The default trace is not a widely publicized feature of SQL Server 2005, but it is slowly gaining fame. It gives administrators the ability to get detailed information about auditing events, database events, error events, full text events, object creation, object deletion and object alteration events. With this much information at their fingertips, administrators are more productive and can more easily identify problems in a production environment. My recommendation is to look through the events and see what information already exists for your instances. Default trace should be used not only reactively but proactively: a proactive mentality will reveal small problems before they escalate into bigger ones.

Event and Category Queries

--list of events
SELECT *
FROM sys.trace_events
--list of categories
SELECT *
FROM sys.trace_categories
--list of subclass values
SELECT *
FROM sys.trace_subclass_values
--Get trace Event Columns
SELECT
t.EventID,
t.ColumnID,
e.name AS Event_Descr,
c.name AS Column_Descr
FROM ::fn_trace_geteventinfo(1) t
INNER JOIN sys.trace_events e
ON t.eventID = e.trace_event_id
INNER JOIN sys.trace_columns c
ON t.columnid = c.trace_column_id

References:

List of available events:
http://blogs.technet.com/vipulshah/archive/2007/04/16/default-trace-in-sql-server-2005.aspx
How to enable default trace:
http://msdn.microsoft.com/en-us/library/ms175513(SQL.90).aspx


Thursday, November 6, 2008

Reporting Services Adding VB code / Getting user login whilst still allowing subscriptions

The User!UserID global allows you to determine the Active Directory user who is running your report. Very handy for customising and security. A big problem is that subscriptions will fail, because the report server cannot identify the user from the subscription.

Here is a work around using a custom code function.

If you have really complex tasks - or ones like our problem - then VB code can be embedded in your report server report.


  1. In Design view, right-click the design surface outside the border of the report and click Report Properties.
  2. Click Code.
  3. In Custom code, type the following function:

Public Function UserName()
    Try
        Return Report.User!UserID
    Catch
        Return "System"
    End Try
End Function

Now we need to make the user name available as a parameter in the report so we can use it in our underlying SQL.


  1. Click on the Report Menu -> Parameters
  2. Add a new parameter
  3. In the default values use a non queried value of =Code.UserName() to call the custom code we entered above.
  4. This particular parameter is needed to determine the list of Market parameters used in this report. It needs to be moved (using the up and down arrows) above any parameters dependent on it - in this case Mkt.

The @UserName parameter is now available for us to build our SQL in the data section of the report.
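Stripped of the Reporting Services specifics, the custom code is just a guarded lookup with a fixed fallback. A Python sketch of the pattern (get_user_id here is a hypothetical stand-in for reading Report.User!UserID):

```python
def report_user(get_user_id):
    # Return the caller's user id, or "System" when it cannot be
    # resolved (e.g. a subscription run, where there is no user context)
    try:
        return get_user_id()
    except Exception:
        return "System"
```

An interactive run returns the real login; a subscription run, where the lookup raises, falls back to "System".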

Tuesday, November 4, 2008

VB Script to Kill and Restart a running process


' VBScript to kill a running program, pause for iDelay seconds then restart
' Demonstrates working with winmgmts, timer and shell
' On NT4 boxes this will require Windows Management Instrumentation to be installed
' http://www.microsoft.com/downloads/details.aspx?FamilyID=c174cfb1-ef67-471d-9277-4c2b1014a31e&displaylang=en
' ------------------------ -------------------------------'
Option Explicit
Dim oWMIService, oProcess, colProcess, oShell
Dim intErrNum
Dim strComputer, strProcessKill, strProcessFullPath, iDelay

'set Machine/Process variables here
strComputer = "Machine_Name"
strProcessKill = "wordpad.exe"
strProcessFullPath = """C:\Program Files\Windows NT\Accessories\wordpad.exe""" 'Path has spaces - needs to be sent as quoted


'check for WMI - use WMI to connect to the registry
On Error Resume Next
Dim oReg : Set oReg = GetObject("winmgmts:root\default:StdRegProv")
intErrNum = Err.Number
On Error Goto 0
Err.Clear

'WMI connection error
If intErrNum <> 0 Then
MsgBox "This script requires " & Chr(34) & "WMI" & Chr(34) & _
", Windows Management Instrumentation, to run.", _
vbOKCancel + vbExclamation, "WMI Not Installed!"
WScript.Quit
End If


iDelay = 30 '30 second delay before restart

'locate and kill the process
Set oWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" _
& strComputer & "\root\cimv2")

Set colProcess = oWMIService.ExecQuery ("Select * from Win32_Process" )

For Each oProcess in colProcess
if lcase(oProcess.Name) = lcase(strProcessKill) then
oProcess.Terminate()
'WSCript.Echo "Just killed process " & strProcessKill & " on " & strComputer
end if
Next

'pause for set delay
'WSCript.Echo "Pausing"
WScript.Sleep(iDelay*1000)
'WSCript.Echo "Restarting"

'run program
Set oShell = WScript.CreateObject("WScript.Shell")
oShell.run(strProcessFullPath)


Set oShell = Nothing
Set colProcess = Nothing
Set oWMIService = Nothing

WScript.Quit



Monday, November 3, 2008

OS - Getting back server disk space

Disk drives filling up - here are some things to look for

Unaccounted couple of gig missing?

1.
This can be your pagefile.sys (virtual memory) You can move it to another disk or - for a performance increase - split it across multiple disks (if they are on separate spindles) - there is a good article on this at http://members.shaw.ca/bsanders/WindowsGeneralWeb/RAMVirtualMemoryPageFileEtc.htm
  • In Explorer go to Tools -> Folder Options -> View and make sure "Hide protected operating system files" is unchecked and "Show hidden files" is selected
  • Look in your c:\ directory and you can see the system files
  • Go to My Computer -> Properties -> Advanced Tab ->Performance options -> Virtual Memory
  • Change the size and location of the virtual memory
2. Your disk may be fragmented or have errors. There are tools to correct this: in Explorer, right click the drive -> Properties -> Tools tab

Log Files
Windows log files are located in C:\WINNT\system32\LogFiles. Clear out all but the last month.

Thursday, October 23, 2008

Finding who is connected to a database and disconnecting them

--list the connections - run against master
select spid, status, loginame, hostname, blocked, db_name(dbid), cmd from master..sysprocesses where db_name(dbid) = 'database name'

--disconnect the user
kill 999 --where 999 is the spid from the above statement

Alternative
--list all users in a database - run against the db you are looking into
sp_who

--disconnect the user
kill 999 --where 999 is the spid listed by sp_who

Friday, October 3, 2008

Importing Active Directory Details to SQL Server

The following script can be used to import Active Directory data into a SQL table.

Note - there is a big limitation with using this - only the first 1000 entries will be selected

You need to initially create the Active Directory database as a linked server.

The script uses subqueries to allow for manipulation in the SELECT Field1,Field2,Field3,Field4 FROM statement, and you will have to put in your own AD-specific details for the parameters and fields.


-- to link AD Server
-- sp_addlinkedserver 'ADSI', 'Active Directory Service Interfaces', 'ADSDSOObject', 'adsdatasource'
BEGIN TRANSACTION
TRUNCATE TABLE AdUserDetails
INSERT INTO AdUserDetails (Field1, Field2, Field3, Field4)
-- Select the required fields from Active Directory
SELECT Field1,Field2,Field3,Field4 FROM
(SELECT TOP 100 PERCENT *
FROM OPENQUERY(ADSI,
'<LDAP://dc=YourADParam1,dc=YourADParam2,dc=YourADParam3>;(&(objectCategory=person)(objectClass=user)(name=*));AdField1,AdField2,AdField3,AdField4;subtree') AS t
WHERE AdField3 IS NOT NULL) AS ad
COMMIT TRANSACTION
select * from AdUserDetails

Thursday, October 2, 2008

Reporting Services - Setting up Drill Downs

Reporting Services - Conditional Formatting

Reporting Services - Ordering Matrix Columns

Column header values from a field you are grouping on will be displayed semi-randomly in a matrix report unless an order is specified as follows.

1) Right click on the field to be ordered and open "Edit Group".



Tuesday, September 30, 2008

List objects in a SQL database

This base script for querying SQL Server's system tables is the sharpest tool in the shed.
I mainly use it as the best way to search for occurrences of fields in tables, but it can be modified in a multitude of ways. The sysobjects.xtype field is the key, and I have listed its values below.
pt.sql
-- C : CHECK constraint
-- D : Default or DEFAULT constraint
-- F : FOREIGN KEY constraint
-- L : Log
-- P : Stored procedure
-- PK : PRIMARY KEY constraint (type is K)
-- RF : Replication filter stored procedure
-- S : System tables
-- TR : Triggers
-- U : User table
-- UQ : UNIQUE constraint (type is K)
-- V : Views
-- X : Extended stored procedure
-- TF : Functions
USE master -- only an example - run local to the database you are searching
SELECT
o.name AS TableName,
o.type AS ObjectType,
c.name AS FieldName,
s.name AS FieldType,
c.prec,
c.scale,
c.isnullable
FROM
sysobjects AS o
INNER JOIN syscolumns AS c
ON c.id = o.id
INNER JOIN systypes AS s
ON c.xtype = s.xtype
WHERE
--select all user tables
o.type = 'U'
--other example select all user tables containing the field SalesPersonId
--o.type = 'U' and c.name = 'SalesPersonId'

Thursday, September 25, 2008

Reporting Services - Exporting reports

I keep getting asked how to get a report off a report server. It is not very intuitive, but here is a compilation of various emails I have had to write.

Exporting the report
This describes how to export report server reports from one server to another and/or how to export the source code of existing reports
1. Go to the folder with the existing report you wish to export and click on the “Show Details” button.
2. Click on the Edit Properties for the report
3. Click on the Edit link under the Report Definition sub heading
4. Choose the Save option – this will export the report in rdl format

Import Directly of Extract and Build the code
At this point you can go to Reporting Services 2005 and import the rdl file directly or else you can rebuild the report in Visual Studio so that you will have access to the source for modifications.

Import Directly
1. Open SQL Server 2005 Management Studio
2. Choose Reporting Services as the server type and log onto the server you want to upload the report to
3. Move to the directory to import the file to
4. Right click -> Import file – choose the .rdl file exported previously
5. Set up appropriate data sources and permissions using the Report Manager Web interface
Extract and Build the Code
1. Open either:
a. SQL Server Business Intelligence Dev Studio (Programs->Microsoft SQL Server 2005 ->)
b. MS Visual Studio 2005 (Programs-> MS Visual Studio 2005 ->)
2. File->New->Project->Report Server Project
3. Name the project the same as the original

Tuesday, September 23, 2008

Tip Using SQL Profiler - run as log then load into a table

When running a trace in SQL Profiler it is more convenient to manipulate the data in a table, but you incur lower overheads saving to a file.

The following tip is from the Microsoft article http://msdn.microsoft.com/en-us/library/ms979207.aspx?ppud=4

Run the trace as a file then import into a table
SELECT * INTO trace1108 FROM ::fn_trace_gettable('C:\MyTrace.trc', default)

Setting filters in SQL Profiler is a pain in the bottom - expect to get it wrong and have to fiddle with your settings. Often it is better just to run what you can, load it into a table and run a few deletes to get rid of records that are known not to be needed.

Thursday, September 18, 2008

Script to Delete Reporting Services Subscriptions

I wrote this script to delete all subscriptions against a reporting services report.

SQL 2000 offers fairly poor management tools for Reporting Services and I had a situation where automatically generated subscriptions were piling up on a 2000 box to the point where they could not be deleted manually.

To run - create a console program (note - console program) in Visual Studio and put the following code in a module. You will probably need to set up a few dependencies in the references.

I don't know if anyone will have as specific requirements as this but it might be a help if you are looking at programming against the reporting services object model.

Imports System
Imports System.Web.Services.Protocols
Imports RSManagement_Unsubscribe.ReportingServices

'Gerard Conlon 18 Sept 2008
'Program to delete all report subscriptions against a given ReportingServices report
'Written for compilation as a console application in Visual Studio


Module modRsManagement

Sub Main()
Dim strReportOwner As String
Dim strReportPath As String

Dim strParams() As String
Dim i As Integer

'the parameters are passed in the form of RSManagement_Unsubscribe.exe "ReportPath|ReportOwner
'they are wrapped in double quotes and separated by the pipe |
'ie called from the command line as
' RSManagement_Unsubscribe.exe "/MyReportsDir/ReportName|MyServer\ASPNET"
'ReportPath - full report path from the top level directory and including the report name
'ReportOwner - usually the name of the server hosting reporting services and running under the ASPNET user ie MyServer\ASPNET"

'Get the passed parameters
If Len(Command()) > 0 Then
strParams = Split(Mid$(Command(), 2, Len(Command()) - 2), "|")
For i = 0 To UBound(strParams)
If i = 0 Then strReportPath = strParams(0)
If i = 1 Then strReportOwner = strParams(1)
Next i
End If


'check to see if parameters are passed and if not give descriptive error
If Trim(strReportPath) = "" Or Trim(strReportOwner) = "" Then
Console.WriteLine("")
Console.WriteLine("The Report Path and The Report Owner must be specified as parameters")
Console.WriteLine("")
Console.WriteLine("Parameters are separated by the pipe char (|) and need to be in double quotes")
Console.WriteLine("")
Console.WriteLine("Example: RSManagement_Unsubscribe.exe " + """" + "/PrePostTimes/CMS Pretimes|AUS-DB1\ASPNET" + """")
Console.WriteLine("")
Console.WriteLine("Press any key to continue.......")
Console.ReadLine() 'hold the screen open so message can be read

Else
Console.WriteLine(fDeleteSubscriptions(strReportPath, strReportOwner))

End If

End Sub


Function fDeleteSubscriptions(ByVal sReport As String, ByVal sOwner As String) As String
Dim rs As New ReportingService
Dim subscriptions As Subscription() = Nothing
Dim schedules As Schedule() = Nothing
Dim i As Integer
Dim cnt As Integer

Console.WriteLine(" Searching for Subscriptions ..........")
rs.Credentials = System.Net.CredentialCache.DefaultCredentials
Console.WriteLine(sReport)
Console.WriteLine(sOwner)
rs.Timeout = 600000
Try
fDeleteSubscriptions = "0"
cnt = 0
Console.WriteLine("Deleting Subscription ..........")
subscriptions = rs.ListSubscriptions(sReport, sOwner)
'loop through and delete the subscriptions
If subscriptions.GetLength(0) > 0 Then
For i = 0 To subscriptions.GetLength(0) - 1
'rs.BeginListSchedules()
Console.WriteLine("Deleting " + subscriptions(i).Description)
rs.DeleteSubscription(subscriptions(i).SubscriptionID)

cnt = cnt + 1

Next
fDeleteSubscriptions = "Deleted " + CStr(cnt) + " Subscriptions"
Else
fDeleteSubscriptions = "Nil (0) Subscriptions"
End If

Catch e As SoapException
fDeleteSubscriptions = fDeleteSubscriptions + (e.Detail.OuterXml)
End Try
End Function
End Module




Sunday, July 27, 2008

SQL 2000 Script to Maintain Indexes

There seem to be a lot of problems with the maintenance of indexes in SQL 2000 using the inbuilt Database Maintenance Plans.

The following script creates 2 tables and a stored procedure that can be run as a periodic job. It will defrag any indexes > 30% fragmentation in its default state.

Just be wary if you are log shipping though. Index work of any kind can blow out your log file size.

To install replace "Database Name Here" with your database name - open query analyzer and run the script.

The handling of the indexes can be controlled via the maint_index table. Be aware that the stored procedure processes the indexes in this table, then repopulates it with the current state of the database. It is designed this way for a set-and-forget operation that also allows you to make modifications to the indexes or force a rebuild by changing values in the maint_index table. In practice this means that if you are running it once per week, the indexes will be rebuilt based on their state the previous week, when the stored procedure was last run.
Fragmenting indexes will bubble up to the top and get processed, but at most every second run.

Script Variables that can be set
  • @db_name - self-explanatory - this needs to be set
  • @maxfrag - changes the fragmentation level at which an index is processed (30% by default)
  • @maxminutes - This is a safety setting to limit how long the defrag process will run. If the script is still defragging after this period of time it will finish the current index and exit the loop
Database Fields that can be changed
You can go to the maint_index table and change the operation performed on individual indexes the next time the script is run.
  • Force_Defrag_REINDEX_1or2 - By default (0) the script decides based on fragmentation and runs a defrag on the index. Set this to 1 to force a defrag, or to 2 to force a full reindex (but be aware that the table might be offline during a reindex)
  • SetFillFactor - This will change the fill factor of the index; the default (0) keeps its existing value
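For example, to force a full rebuild of one index with a 90% fill factor on the next run, you could update its row in maint_index like this (the table and index names here are hypothetical; note the RTRIM because the name columns are CHAR(255)):

```sql
UPDATE maint_index
SET Force_Defrag_REINDEX_1or2 = 2, SetFillFactor = 90
WHERE RTRIM(ObjectName) = 'MyTable'
  AND RTRIM(IndexName) = 'IX_MyTable_MyColumn'
```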
I have had great success with this script improving the performance of "doggy" servers.

Enjoy

Gerard


CREATE TABLE [maint_index] (
[Id] [int] IDENTITY (1, 1) NOT NULL ,
[ObjectName] [char] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[IndexName] [char] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LogicalFrag] [decimal](18, 0) NULL ,
[Force_Defrag_REINDEX_1or2] [int] NULL CONSTRAINT [DF_maint_index_Force_Defrag_REINDEX] DEFAULT (0),
[SetFillFactor] [int] NULL CONSTRAINT [DF_maint_index_FillFactor] DEFAULT (0),
[ObjectId] [int] NULL ,
[IndexId] [int] NULL ,
[Lvl] [int] NULL ,
[CountPages] [int] NULL ,
[CountRows] [int] NULL ,
[MinRecSize] [int] NULL ,
[MaxRecSize] [int] NULL ,
[AvgRecSize] [int] NULL ,
[ForRecCount] [int] NULL ,
[Extents] [int] NULL ,
[ExtentSwitches] [int] NULL ,
[AvgFreeBytes] [int] NULL ,
[AvgPageDensity] [int] NULL ,
[ScanDensity] [decimal](18, 0) NULL ,
[BestCount] [int] NULL ,
[ActualCount] [int] NULL ,
[ExtentFrag] [decimal](18, 0) NULL ,
[Defrag] [int] NULL CONSTRAINT [DF_maint_index_Defrag] DEFAULT (0),
[Fill_Factor] [int] NULL CONSTRAINT [DF_maint_index_Fill_Factor] DEFAULT (0)
) ON [PRIMARY]
GO

CREATE PROCEDURE [dbo].[maint_index_process] AS

SET NOCOUNT ON

DECLARE @db_name VARCHAR (128)
DECLARE @tablename VARCHAR (128)
DECLARE @execstr VARCHAR (255)
DECLARE @indexname VARCHAR (255)
DECLARE @objectid INT
DECLARE @indexid INT
DECLARE @frag DECIMAL
DECLARE @reindex INT
DECLARE @fill INT
DECLARE @maxfrag DECIMAL
DECLARE @maxminutes INT
DECLARE @maxruntime DATETIME
DECLARE @TempId INT

--specify for each database it is run on
SELECT @db_name = 'CMS'

--maximum fragmentation to allow
SELECT @maxfrag = 30.0

--maximum time for the process to run; it will exit the loop if this time is exceeded
--Note: if a reindex runs over, it will not be stopped
SELECT @maxminutes = 90 --1.5 hours

SELECT @maxruntime = DATEADD(minute, @maxminutes, GETDATE()) --latest allowed finish time

--process reindexes, marked defrags, defrags over @maxfrag

DECLARE indexes CURSOR FOR
SELECT ObjectName, ObjectId, IndexName, IndexId, LogicalFrag, Force_Defrag_REINDEX_1or2, SetFillFactor
FROM maint_index
WHERE IndexId > 0
AND IndexId <> 255 --255 indicates image data, which cannot be defragmented
AND (LogicalFrag >= @maxfrag OR Force_Defrag_REINDEX_1or2 > 0)
ORDER BY Force_Defrag_REINDEX_1or2 DESC --do the reindexing first, then indexes specified to be defragmented, then indexes fragmented over @maxfrag

-- Open the cursor
OPEN indexes

-- loop through the indexes
FETCH NEXT FROM indexes
INTO @tablename, @objectid, @indexname, @indexid, @frag, @reindex, @fill

WHILE (@@FETCH_STATUS = 0) AND (GETDATE() < @maxruntime) --while we still have records and we haven't exceeded our time limit
BEGIN
    IF @reindex = 2
    BEGIN
        IF @fill < 1
        BEGIN
            SELECT @execstr = '])' --go with the default fill factor
            SELECT @fill = -1 --indicate in history table that the existing fill was used
        END
        ELSE
        BEGIN
            SELECT @execstr = '],' + CAST(@fill AS VARCHAR) + ')'
        END

        --create the command and insert initial history record
        SELECT @execstr = 'DBCC DBREINDEX (''[' + @db_name + '].dbo.[' + RTRIM(@tablename) + ']'',[' + RTRIM(@indexname) + @execstr

        INSERT INTO maint_index_history (maint_Database, maint_Table, maint_Index, StartTime, RequestedAction, FillUsed, CommandString)
        VALUES (@db_name, RTRIM(@tablename), RTRIM(@indexname), GETDATE(), @reindex, @fill, @execstr)
        SELECT @TempId = @@IDENTITY

        --Do a reindex
        EXEC (@execstr)

        --update the history record
        UPDATE maint_index_history SET EndTime = GETDATE() WHERE Id = @TempId
        SELECT @TempId = 0
    END
    ELSE
    BEGIN
        --create the command and insert initial history record
        SELECT @execstr = 'DBCC INDEXDEFRAG ([' + @db_name + '], [' + RTRIM(@tablename) + '],[' + RTRIM(@indexname) + '])'

        INSERT INTO maint_index_history (maint_Database, maint_Table, maint_Index, StartTime, RequestedAction, CommandString)
        VALUES (@db_name, RTRIM(@tablename), RTRIM(@indexname), GETDATE(), @reindex, @execstr)
        SELECT @TempId = @@IDENTITY

        --Do a defrag
        EXEC (@execstr)

        --update the history record
        UPDATE maint_index_history SET EndTime = GETDATE() WHERE Id = @TempId
        SELECT @TempId = 0
    END

    FETCH NEXT FROM indexes
    INTO @tablename, @objectid, @indexname, @indexid, @frag, @reindex, @fill
END

-- Close and deallocate the cursor
CLOSE indexes
DEALLOCATE indexes

-- Create the temporary table to hold the DBCC SHOWCONTIG output
CREATE TABLE #fraglist (
    ObjectName CHAR (255),
    ObjectId INT,
    IndexName CHAR (255),
    IndexId INT,
    Lvl INT,
    CountPages INT,
    CountRows INT,
    MinRecSize INT,
    MaxRecSize INT,
    AvgRecSize INT,
    ForRecCount INT,
    Extents INT,
    ExtentSwitches INT,
    AvgFreeBytes INT,
    AvgPageDensity INT,
    ScanDensity DECIMAL,
    BestCount INT,
    ActualCount INT,
    LogicalFrag DECIMAL,
    ExtentFrag DECIMAL)

-- Declare cursor over all user tables
DECLARE tables CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'

-- Open the cursor
OPEN tables

-- Loop through all the tables in the database
FETCH NEXT FROM tables
INTO @tablename

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Do the showcontig of all indexes of the table
    INSERT INTO #fraglist
    EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
        WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

    FETCH NEXT FROM tables
    INTO @tablename
END

-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables

--repopulate maint_index from the temporary table, then drop it
TRUNCATE TABLE maint_index --clear out existing data

INSERT INTO maint_index
(ObjectName, IndexName, LogicalFrag, ObjectId, IndexId, Lvl, CountPages, CountRows, MinRecSize, MaxRecSize, AvgRecSize, ForRecCount, Extents, ExtentSwitches, AvgFreeBytes, AvgPageDensity, ScanDensity, BestCount, ActualCount, ExtentFrag)
SELECT
ObjectName, IndexName, LogicalFrag, ObjectId, IndexId, Lvl, CountPages, CountRows, MinRecSize, MaxRecSize, AvgRecSize, ForRecCount, Extents, ExtentSwitches, AvgFreeBytes, AvgPageDensity, ScanDensity, BestCount, ActualCount, ExtentFrag
FROM #fraglist
WHERE IndexId > 0

DROP TABLE #fraglist
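Once installed, the procedure can be run from a scheduled job or kicked off manually:

```sql
EXEC dbo.maint_index_process
```

Note that on the very first run maint_index is empty, so nothing gets defragmented - that run only takes the initial fragmentation snapshot. Index work starts from the second run onwards.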