NuGet pack with include referenced projects

Yesterday I wanted to build a NuGet package from a project that references another project in the same solution. The referenced project, however, was not added to the package when I packed a new NuGet package.

I searched the internet and found some remarks about adding a parameter called includereferencedprojects.
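
If you pack from the command line, this corresponds to the -IncludeReferencedProjects switch of nuget pack (the project file name here is illustrative):

nuget pack MyProject.csproj -IncludeReferencedProjects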

Then I looked at my pack command in Azure DevOps and saw that the parameter was not set in the NuGet pack task.

The YAML I had did not contain the parameter yet.

At first I didn’t know how to add the includereferencedprojects parameter. Then I changed the “automatic package versioning” setting under “pack options”: when I set it to “Off”, an extra checkbox was presented. I checked the box and set the versioning back to its previous value.

Now my YAML included the includereferencedprojects parameter.
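
For reference, a minimal sketch of what the pack step looked like afterwards, assuming the NuGetCommand@2 task (the packagesToPack pattern is illustrative):

steps:
- task: NuGetCommand@2
  displayName: 'NuGet pack'
  inputs:
    command: pack
    packagesToPack: '**/*.csproj'
    includeReferencedProjects: true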

After I started a new build with this pipeline, the referenced project dlls were added to my package.

Registering / Installing a Windows Service

After you have written a Windows Service in Visual Studio you will want to install and run it to test and use the service.

You can register and run your service with a few simple steps. I use a service called ServiceName in the following examples.

First start a command prompt (cmd) as administrator.

Installing a service is done using sc create, syntax:
sc create ServiceName binPath= "<path to .exe>"

Note the space after binPath=; sc.exe requires it. If the command was successful it will report SUCCESS. Example:

C:\WINDOWS\system32>sc create ServiceName binPath= "C:\repos\ServiceMonitor\ServiceName\bin\Debug\ServiceName.exe"
[SC] CreateService SUCCESS

To start the service use the command net start, syntax:
net start ServiceName
Example:

C:\WINDOWS\system32>net start ServiceName
The ServiceName service is starting.
The ServiceName service was started successfully.
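
To check the current state of the service you can use sc query, syntax:
sc query ServiceName

Among other things this reports the service STATE, for example RUNNING or STOPPED.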

To stop the service use the command net stop, syntax:
net stop ServiceName
Example:

C:\WINDOWS\system32>net stop ServiceName
The ServiceName service is stopping.
The ServiceName service was stopped successfully.

To delete / uninstall the service use sc delete, syntax:
sc delete ServiceName
Example:

C:\WINDOWS\system32>sc delete servicename
[SC] DeleteService SUCCESS

UPDATE 28-2-2019: New post: I added an installer to the executable, which enables the service to install itself by running the executable. Read on in this follow-up article: http://kannekens.nl/registering-installing-a-windows-service-part-2/

Git Branching – Branches in a Nutshell

https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell

3.1 Git Branching – Branches in a Nutshell

Nearly every VCS has some form of branching support. Branching means you diverge from the main line of development and continue to do work without messing with that main line. In many VCS tools, this is a somewhat expensive process, often requiring you to create a new copy of your source code directory, which can take a long time for large projects.

Some people refer to Git’s branching model as its “killer feature,” and it certainly sets Git apart in the VCS community. Why is it so special? The way Git branches is incredibly lightweight, making branching operations nearly instantaneous, and switching back and forth between branches generally just as fast. Unlike many other VCSs, Git encourages workflows that branch and merge often, even multiple times in a day. Understanding and mastering this feature gives you a powerful and unique tool and can entirely change the way that you develop.

Branches in a Nutshell

To really understand the way Git does branching, we need to take a step back and examine how Git stores its data.

As you may remember from Getting Started, Git doesn’t store data as a series of changesets or differences, but instead as a series of snapshots.

When you make a commit, Git stores a commit object that contains a pointer to the snapshot of the content you staged. This object also contains the author’s name and email address, the message that you typed, and pointers to the commit or commits that directly came before this commit (its parent or parents): zero parents for the initial commit, one parent for a normal commit, and multiple parents for a commit that results from a merge of two or more branches.

To visualize this, let’s assume that you have a directory containing three files, and you stage them all and commit. Staging the files computes a checksum for each one (the SHA-1 hash we mentioned in Getting Started), stores that version of the file in the Git repository (Git refers to them as blobs), and adds that checksum to the staging area:

$ git add README test.rb LICENSE

$ git commit -m 'The initial commit of my project'

When you create the commit by running git commit, Git checksums each subdirectory (in this case, just the root project directory) and stores those tree objects in the Git repository. Git then creates a commit object that has the metadata and a pointer to the root project tree so it can re-create that snapshot when needed.

Your Git repository now contains five objects: three blobs (each representing the contents of one of the three files), one tree that lists the contents of the directory and specifies which file names are stored as which blobs, and one commit with the pointer to that root tree and all the commit metadata.
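
If you want to see these objects for yourself, you can inspect them with the plumbing command git cat-file (the hashes in your repository will differ):

$ git cat-file -p HEAD

$ git cat-file -p HEAD^{tree}

The first command prints the commit object (its tree pointer, parents, author, committer, and message); the second lists the blobs that make up the root tree.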

Figure 9. A commit and its tree

If you make some changes and commit again, the next commit stores a pointer to the commit that came immediately before it.

Figure 10. Commits and their parents

A branch in Git is simply a lightweight movable pointer to one of these commits. The default branch name in Git is master. As you start making commits, you’re given a master branch that points to the last commit you made. Every time you commit, the master branch pointer moves forward automatically.

Note: The “master” branch in Git is not a special branch. It is exactly like any other branch. The only reason nearly every repository has one is that the git init command creates it by default and most people don’t bother to change it.

Figure 11. A branch and its commit history

Creating a New Branch

What happens when you create a new branch? Well, doing so creates a new pointer for you to move around. Let’s say you want to create a new branch called testing. You do this with the git branch command:

$ git branch testing

This creates a new pointer to the same commit you’re currently on.

Figure 12. Two branches pointing into the same series of commits

How does Git know what branch you’re currently on? It keeps a special pointer called HEAD. Note that this is a lot different than the concept of HEAD in other VCSs you may be used to, such as Subversion or CVS. In Git, this is a pointer to the local branch you’re currently on. In this case, you’re still on master. The git branch command only created a new branch — it didn’t switch to that branch.

Figure 13. HEAD pointing to a branch

You can easily see this by running a simple git log command that shows you where the branch pointers are pointing. This option is called --decorate.

$ git log --oneline --decorate

f30ab (HEAD -> master, testing) add feature #32 - ability to add new formats to the central interface

34ac2 Fixed bug #1328 - stack overflow under certain conditions

98ca9 The initial commit of my project

You can see the “master” and “testing” branches that are right there next to the f30ab commit.

Switching Branches

To switch to an existing branch, you run the git checkout command. Let’s switch to the new testing branch:

$ git checkout testing

This moves HEAD to point to the testing branch.

Figure 14. HEAD points to the current branch

What is the significance of that? Well, let’s do another commit:

$ vim test.rb

$ git commit -a -m 'made a change'

Figure 15. The HEAD branch moves forward when a commit is made

This is interesting, because now your testing branch has moved forward, but your master branch still points to the commit you were on when you ran git checkout to switch branches. Let’s switch back to the master branch:

$ git checkout master

Figure 16. HEAD moves when you checkout

That command did two things. It moved the HEAD pointer back to point to the master branch, and it reverted the files in your working directory back to the snapshot that master points to. This also means the changes you make from this point forward will diverge from an older version of the project. It essentially rewinds the work you’ve done in your testing branch so you can go in a different direction.

Note: Switching branches changes files in your working directory. It’s important to note that when you switch branches in Git, files in your working directory will change. If you switch to an older branch, your working directory will be reverted to look like it did the last time you committed on that branch. If Git cannot do it cleanly, it will not let you switch at all.

Let’s make a few changes and commit again:

$ vim test.rb

$ git commit -a -m 'made other changes'

Now your project history has diverged (see Divergent history). You created and switched to a branch, did some work on it, and then switched back to your main branch and did other work. Both of those changes are isolated in separate branches: you can switch back and forth between the branches and merge them together when you’re ready. And you did all that with simple branch, checkout, and commit commands.

Figure 17. Divergent history

You can also see this easily with the git log command. If you run git log --oneline --decorate --graph --all it will print out the history of your commits, showing where your branch pointers are and how your history has diverged.

$ git log --oneline --decorate --graph --all

* c2b9e (HEAD -> master) made other changes

| * 87ab2 (testing) made a change

|/

* f30ab add feature #32 - ability to add new formats to the

* 34ac2 fixed bug #1328 - stack overflow under certain conditions

* 98ca9 initial commit of my project

Because a branch in Git is actually a simple file that contains the 40-character SHA-1 checksum of the commit it points to, branches are cheap to create and destroy. Creating a new branch is as quick and simple as writing 41 bytes to a file (40 characters and a newline).
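
You can verify this yourself: as long as the ref has not been packed, a branch like testing is just a file under .git/refs/heads (the checksum will differ in your repository):

$ cat .git/refs/heads/testing

This prints nothing more than the SHA-1 of the commit the branch points to.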

This is in sharp contrast to the way most older VCS tools branch, which involves copying all of the project’s files into a second directory. This can take several seconds or even minutes, depending on the size of the project, whereas in Git the process is always instantaneous. Also, because we’re recording the parents when we commit, finding a proper merge base for merging is automatically done for us and is generally very easy to do. These features help encourage developers to create and use branches often.

How to set up a local deployment for an Azure build application

*Important note: This solution will only work when you do NOT have a .gitignore file in your repository*

Configure a local agent

The first requirement is to set up a local agent that will be used for the local tasks.

How to configure local build and deploy agents is explained here:

https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=vsts

The result should be an agent that is registered and listed in your agent pool.

To control this agent you can install it as a service on Windows, or you can run the agent from the command line. To start and stop the agent from the command line I added two scripts:

Start.cmd:
cd /d C:\EK-VSTS-Agent
start "EK-VSTS Azure agent" .\run.cmd
exit

Stop.cmd:

taskkill /FI "WindowTitle eq EK-VSTS Azure agent*" /T /F

Set up a build and release pipeline in Azure

Go to Pipelines in your Azure DevOps project and click New pipeline. My example uses a project named WPFDatasetWithSQL.


Click Continue, choose the .NET Desktop template, and click Apply.

If you want to build the solution on a hosted machine, keep the “Agent pool” set to “Hosted VS2017”. If the build needs local components, you can use a local machine instead, or install the required components as part of the build script.

For this example I have no need for extra components and I will keep the Agent pool on Hosted VS2017.

We are going to change a few steps in this script (a YAML sketch of the result follows the list below):

1 Set the MSBuild Arguments to /target:publish. This makes MSBuild add an app.publish folder to the build directory for ClickOnce deployment.

2 Change the Copy Files step to copy the app.publish folder to the artifacts folder.
Display name = Copy Files to: $(build.artifactstagingdirectory)
Source Folder = $(Build.SourcesDirectory)\src\BLM\bin\$(BuildConfiguration)\app.publish
Contents = **\**

3 Change the artifact name.
Display name = Publish Artifact: $(System.TeamProject)-$(Build.BuildNumber)
Artifact name = $(System.TeamProject)-$(Build.BuildNumber)
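
Put together, a sketch of how these changed steps could look in YAML (task versions and the source path are assumptions based on the values above):

- task: VSBuild@1
  inputs:
    solution: '**\*.sln'
    msbuildArgs: '/target:publish'
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'

- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactstagingdirectory)'
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)\src\BLM\bin\$(BuildConfiguration)\app.publish'
    Contents: '**\**'
    TargetFolder: '$(build.artifactstagingdirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: $(System.TeamProject)-$(Build.BuildNumber)'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
    ArtifactName: '$(System.TeamProject)-$(Build.BuildNumber)'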

Click Save and keep the default name.

Set up a release pipeline

Now we will set up a release pipeline in which we can control and manage releases for this application.

Click on Releases in the menu and click New pipeline.

Choose an Empty job template. The release pipeline will contain not much more than a few copy tasks.

First we have to choose an artifact. The choice is simple: we are going to use the artifacts from the build pipeline. Select the source build pipeline set up in the previous step and click the Add button below.

The next step in this release pipeline is a deployment to “Test”, so we rename the default “Stage 1” to “Test”. Clicking the Stage 1 image (not the link to the job and task) opens a properties window. Rename Stage 1 to Test and click Save in the top right corner.

Now click the link to the job and task in the Test stage. Click the agent job and change the agent pool to the pool where you added the local agent. In my example I added the local agent to a pool named “local machine”.

Now we will add a task to copy the publish folder to a local directory. Click the plus sign next to “Agent job” and search for “Copy Files”.

Select the task added below the agent job and fill in the details:
Display name = Copy Files to: c:\drop\$(System.TeamProject)\$(Release.EnvironmentName)\$(Release.ReleaseName)\
Source Folder = $(system.defaultworkingdirectory)\_WPFDatasetWithSQL-.NET Desktop-CI (this last directory name is the build pipeline name)
Target Folder = c:\drop\$(System.TeamProject)\$(Release.EnvironmentName)\$(Release.ReleaseName)\

The source folder contains the build pipeline name preceded by an underscore.

Click Save in the top right-hand corner.

Now we are going to add the production stage and the required copy jobs for this stage.

Click Releases in the left menu and click Edit.

Click “Clone” in the Test stage and rename this new stage from “Copy of Test” to “Production”. Then click the task details; here I added System.TeamProject to the source folder name, which removes the build number from the destination name.

Next click the plus sign for the “Agent job” to add a command line script. With this command line we first clean the install folder before we copy the new release to that location. The command line script is:

rd /S /Q c:\drop\$(System.TeamProject)\Install\

The last task for this job is a second “Copy Files” task, which copies the publish content into the install folder.

For the first run, disable the command line script: rd fails with an error if the directory does not exist yet. After the first run the command can be enabled.
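
Alternatively, you could guard the delete so it also succeeds on the first run; a one-line sketch for the command line script:

if exist c:\drop\$(System.TeamProject)\Install\ rd /S /Q c:\drop\$(System.TeamProject)\Install\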

The last option is to add an approval trigger on Production, so that a test manager or a group of testers has to approve the release after testing.

Another nice feature is to enable continuous integration and continuous deployment in Azure. For this, go to the build pipeline and check “Enable continuous integration” on the “Triggers” tab.

Second, go to the release pipeline, click the continuous deployment trigger, and enable deployment every time a new build is available. Click Save.

The first two deployments failed; I checked the logs and fixed some typos.

After approving the release the install folder will be updated with the required binaries.

All done. Enjoy.

Useful SQL Server DBCC Commands

A useful link:
http://www.sql-server-performance.com/tips/dbcc_commands_p1.aspx

If the link should fail, here is the content:

Useful SQL Server DBCC Commands
By Brad McGehee

DBCC CACHESTATS displays information about the objects currently in the buffer cache, such as hit rates, compiled objects and plans, etc. Example:
DBCC CACHESTATS
Sample Results (abbreviated):
Object Name Hit Ratio
------------ -------------
Proc 0.86420054765378507
Prepared 0.99988494930394334
Adhoc 0.93237136647793051
ReplProc 0.0
Trigger 0.99843452831887947
Cursor 0.42319205924058612
Exec Cxt 0.65279111666076906
View 0.95740334726893905
Default 0.60895011346896522
UsrTab 0.94985969576133511
SysTab 0.0
Check 0.67021276595744683
Rule 0.0
Summary 0.80056155581812771
Here’s what some of the key statistics from this command mean:
⦁ Hit Ratio: Displays the percentage of time that this particular object was found in SQL Server’s cache. The bigger this number, the better.
⦁ Object Count: Displays the total number of objects of the specified type that are cached.
⦁ Avg. Cost: A value used by SQL Server that measures how long it takes to compile a plan, along with the amount of memory needed by the plan. This value is used by SQL Server to determine if the plan should be cached or not.
⦁ Avg. Pages: Measures the total number of 8K pages used, on average, for cached objects.
⦁ LW Object Count, LW Avg Cost, WL Avg Stay, LW Ave Use: All these columns indicate how many of the specified objects have been removed from the cache by the Lazy Writer. The lower the figure, the better.
[7.0, 2000] Updated 9-1-2005

DBCC DROPCLEANBUFFERS: Use this command to remove all the data from SQL Server’s data cache (buffer) between performance tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, you may want to run the CHECKPOINT command first, before running DBCC DROPCLEANBUFFERS. Running CHECKPOINT will write all dirty buffers to disk. Then when you run DBCC DROPCLEANBUFFERS, you can be assured that all data buffers are cleaned out, not just the clean ones. Example:
DBCC DROPCLEANBUFFERS
[7.0, 2000, 2005] Updated 9-1-2005

DBCC ERRORLOG: If you rarely restart the mssqlserver service, you may find that your server log gets very large and takes a long time to load and view. You can truncate (essentially create a new log) the current server log by running DBCC ERRORLOG. You might want to consider scheduling a regular job that runs this command once a week to automatically truncate the server log. As a rule, I do this for all of my SQL Servers on a weekly basis. Also, you can accomplish the same thing using the stored procedure sp_cycle_errorlog. Example:
DBCC ERRORLOG
[7.0, 2000, 2005] Updated 9-1-2005

DBCC FLUSHPROCINDB: Used to clear out the stored procedure cache for a specific database on a SQL Server, not the entire SQL Server. The database ID number to be affected must be entered as part of the command. You may want to use this command before testing to ensure that previous stored procedure plans won’t negatively affect testing results. Example:
DECLARE @intDBID INTEGER
SET @intDBID = (SELECT dbid FROM master.dbo.sysdatabases WHERE name = 'database_name')
DBCC FLUSHPROCINDB (@intDBID)
[7.0, 2000, 2005] Updated 9-1-2005

DBCC INDEXDEFRAG: In SQL Server 2000, Microsoft introduced DBCC INDEXDEFRAG to help reduce logical disk fragmentation. When this command runs, it reduces fragmentation and does not lock tables, allowing users to access the table while the defragmentation process is running. Unfortunately, this command doesn’t do a great job of logical defragmentation. The only way to truly reduce logical fragmentation is to rebuild your table’s indexes. While this will remove all fragmentation, unfortunately it will lock the table, preventing users from accessing it during this process. This means that you will need to find a time when this will not present a problem to your users. Of course, if you are unable to find a time to reindex your indexes, then running DBCC INDEXDEFRAG is better than doing nothing. Example:
DBCC INDEXDEFRAG (Database_Name, Table_Name, Index_Name)
[2000] Updated 9-1-2005

DBCC FREEPROCCACHE: Used to clear out the stored procedure cache for all SQL Server databases. You may want to use this command before testing to ensure that previous stored procedure plans won’t negatively affect testing results. Example:
DBCC FREEPROCCACHE
[7.0, 2000, 2005] Updated 10-16-2005

DBCC MEMORYSTATUS: Lists a breakdown of how the SQL Server buffer cache is divided up, including buffer activity. This is an undocumented command, and one that may be dropped in future versions of SQL Server. Example:
DBCC MEMORYSTATUS
[7.0, 2000] Updated 10-16-2005

DBCC OPENTRAN: An open transaction can leave locks open, preventing others from accessing the data they need in a database. This command is used to identify the oldest open transaction in a specific database. Example:
DBCC OPENTRAN('database_name')
[7.0, 2000] Updated 10-16-2005

DBCC PAGE: Use this command to look at the contents of a data page stored in SQL Server. Example:
DBCC PAGE ({dbid|dbname}, pagenum [,print option] [,cache] [,logical])
where:
Dbid or dbname: Enter either the dbid or the name of the database in question.
Pagenum: Enter the page number of the SQL Server page that is to be examined.
Print option: (Optional) Can be either 0, 1, or 2.
0 - (Default) This option causes DBCC PAGE to print out only the page header information.
1 - This option causes DBCC PAGE to print out the page header information, each row of information from the page, and the page’s offset table. Each of the rows printed out will be separated from each other.
2 - This option is the same as option 1, except it prints the page rows as a single block of information rather than separating the individual rows. The offset and header will also be displayed.
Cache: (Optional) This parameter allows either a 1 or a 0 to be entered.
0 - This option causes DBCC PAGE to retrieve the page from disk rather than checking to see if it is in cache.
1 - (Default) This option takes the page from cache if it is in cache rather than getting it from disk only.
Logical: (Optional) This parameter is for use if the page number that is to be retrieved is a virtual page rather than a logical page. It can be either 0 or 1.
0 - If the page is a virtual page number.
1 - (Default) If the page is the logical page number.
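
One practical note that is not in the original article: by default DBCC PAGE sends its output to the error log. To see the output in your own session, turn on trace flag 3604 first and then run DBCC PAGE as described above:

DBCC TRACEON (3604)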

[6.5, 7.0, 2000] Updated 10-16-2005

DBCC PINTABLE & DBCC UNPINTABLE: By default, SQL Server automatically brings into its data cache the pages it needs to work with. These data pages will stay in the data cache until there is no room for them, and assuming they are not needed, these pages will be flushed out of the data cache onto disk. At some point in the future when SQL Server needs these data pages again, it will have to go to disk in order to read them again into the data cache for use. If SQL Server somehow had the ability to keep the data pages in the data cache all the time, then SQL Server’s performance would be increased because I/O could be reduced on the server.

The process of “pinning a table” is a way to tell SQL Server that we don’t want it to flush out data pages for specific named tables once they are read into the cache in the first place. This in effect keeps these database pages in the data cache all the time, which spares SQL Server the process of having to read the data pages, flush them out, and reread them again when the time arrives. As you can imagine, this can reduce I/O for these pinned tables, boosting SQL Server’s performance.

To pin a table, the command DBCC PINTABLE is used. For example, the script below can be run to pin a table in SQL Server:
DECLARE @db_id int, @tbl_id int
USE Northwind
SET @db_id = DB_ID('Northwind')
SET @tbl_id = OBJECT_ID('Northwind..categories')
DBCC PINTABLE (@db_id, @tbl_id)
While you can use DBCC PINTABLE directly, without the rest of the above script, you will find the script handy because DBCC PINTABLE’s parameters are the database and table ID that you want to pin, not the database and table name. This script makes it a little easier to pin a table. You must run this command for every table you want to pin.

Once a table is pinned in the data cache, this does not mean that the entire table is automatically loaded into the data cache. It only means that as data pages from that table are needed by SQL Server, they are loaded into the data cache and then stay there, never being flushed out to disk until you give the command to unpin the table using DBCC UNPINTABLE. It is possible that only part of a table, and not all of it, will be pinned.

When you are done with a table and you no longer want it pinned, you will want to unpin your table. To do so, run this example code:
DECLARE @db_id int, @tbl_id int
USE Northwind
SET @db_id = DB_ID('Northwind')
SET @tbl_id = OBJECT_ID('Northwind..categories')
DBCC UNPINTABLE (@db_id, @tbl_id)
[6.5, 7.0, 2000] Updated 10-16-2005

DBCC PROCCACHE: Displays information about how the stored procedure cache is being used. Example:
DBCC PROCCACHE
[6.5, 7.0, 2000] Updated 10-16-2005

DBCC DBREINDEX: Periodically (weekly or monthly) perform a database reorganization on all the indexes on all the tables in your database. This will rebuild the indexes so that the data is no longer fragmented. Fragmented data can cause SQL Server to perform unnecessary data reads, slowing down SQL Server’s performance. If you perform a reorganization on a table with a clustered index, any non-clustered indexes on that same table will automatically be rebuilt. Database reorganizations can be done by scheduling SQLMAINT.EXE to run using the SQL Server Agent, or by running your own custom script via the SQL Server Agent (see below). Unfortunately, the DBCC DBREINDEX command will not automatically rebuild all of the indexes on all the tables in a database; it can only work on one table at a time. But if you run the following script, you can index all the tables in a database with ease. Example:
DBCC DBREINDEX('table_name', fillfactor)
or
--Script to automatically reindex all tables in a database

USE DatabaseName --Enter the name of the database you want to reindex

DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'Reindexing ' + @TableName
DBCC DBREINDEX(@TableName, ' ', 90)
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor

DEALLOCATE TableCursor
The script will automatically reindex every index in every table of any database you select, and provide a fillfactor of 90%. You can substitute any number you want for the 90 in the above script. When DBCC DBREINDEX is used to rebuild indexes, keep in mind that as the indexes on a table are being rebuilt, the table becomes unavailable for use by your users. For example, when a non-clustered index is rebuilt, a shared table lock is put on the table, preventing all but SELECT operations from being performed on it. When a clustered index is rebuilt, an exclusive table lock is put on the table, preventing any table access by your users. Because of this, you should only run this command when users don’t need access to the tables being reorganized.

[7.0, 2000] Updated 10-16-2005

DBCC SHOWCONTIG: Used to show how fragmented data and indexes are in a specified table. If data pages storing data or index information become fragmented, it takes more disk I/O to find and move the data to the SQL Server cache buffer, hurting performance. This command tells you how fragmented these data pages are. If you find that fragmentation is a problem, you can reindex the tables to eliminate the fragmentation. Note: this fragmentation is fragmentation of data pages within the SQL Server MDB file, not of the physical file itself. Since this command requires you to know the ID of both the table and index being analyzed, you may want to run the following script so you don’t have to manually look up the table name ID number and the index ID number. Example:
DBCC SHOWCONTIG (Table_id, IndexID)
Or:
--Script to identify table fragmentation

--Declare variables
DECLARE
@ID int,
@IndexID int,
@IndexName varchar(128)

--Set the table and index to be examined
SELECT @IndexName = 'index_name' --enter name of index
SET @ID = OBJECT_ID('table_name') --enter name of table

--Get the Index Values
SELECT @IndexID = IndID
FROM sysindexes
WHERE id = @ID AND name = @IndexName

--Display the fragmentation
DBCC SHOWCONTIG (@ID, @IndexID)
While the DBCC SHOWCONTIG command provides several measurements, the key one is Scan Density. This figure should be as close to 100% as possible. If the scan density is less than 75%, then you may want to reindex the tables in your database.

[6.5, 7.0, 2000] Updated 3-20-2006

DBCC SHOW_STATISTICS: Used to find out the selectivity of an index. Generally speaking, the higher the selectivity of an index, the greater the likelihood it will be used by the query optimizer. You have to specify both the table name and the index name you want to find the statistics on. Example:
DBCC SHOW_STATISTICS (table_name, index_name)
[7.0, 2000] Updated 3-20-2006

DBCC SQLMGRSTATS: Used to produce three different values that can sometimes be useful when you want to find out how well caching is being performed on ad-hoc and prepared Transact-SQL statements. Example:
DBCC SQLMGRSTATS
Sample Results:
Item Status
------------------------- -----------
Memory Used (8k Pages) 5446
Number CSql Objects 29098
Number False Hits 425490
Here’s what the above means:
⦁ Memory Used (8k Pages): If the amount of memory pages is very large, this may be an indication that some user connection is preparing many Transact-SQL statements but is not un-preparing them.
⦁ Number CSql Objects: Measures the total number of cached Transact-SQL statements.
⦁ Number False Hits: Sometimes, false hits occur when SQL Server goes to match pre-existing cached Transact-SQL statements. Ideally, this figure should be as low as possible.
[2000] Added 4-17-2003

DBCC SQLPERF(): This command includes both documented and undocumented options. Let’s take a look at all of them and see what they do.
DBCC SQLPERF (LOGSPACE)
This option (documented) returns data about the transaction log for all of the databases on the SQL Server, including Database Name, Log Size (MB), Log Space Used (%), and Status.
DBCC SQLPERF (UMSSTATS)
This option (undocumented) returns data about SQL Server thread management.
DBCC SQLPERF (WAITSTATS)
This option (undocumented) returns data about wait types for SQL Server resources.
DBCC SQLPERF (IOSTATS)
This option (undocumented) returns data about outstanding SQL Server reads and writes.
DBCC SQLPERF (RASTATS)
This option (undocumented) returns data about SQL Server read-ahead activity.
DBCC SQLPERF (THREADS)
This option (undocumented) returns data about I/O, CPU, and memory usage per SQL Server thread. [7.0, 2000] Updated 3-20-2006

DBCC SQLPERF (UMSSTATS): When you run this command, you get output like this. (Note: this example was run on a 4-CPU server. There is 1 Scheduler ID per available CPU.)

Statistic Value
-------------------------------- ------------------------
Scheduler ID 0.0
num users 18.0
num runnable 0.0
num workers 13.0
idle workers 11.0
work queued 0.0
cntxt switches 2.2994396E+7
cntxt switches(idle) 1.7793976E+7
Scheduler ID 1.0
num users 15.0
num runnable 0.0
num workers 13.0
idle workers 10.0
work queued 0.0
cntxt switches 2.4836728E+7
cntxt switches(idle) 1.6275707E+7
Scheduler ID 2.0
num users 17.0
num runnable 0.0
num workers 12.0
idle workers 11.0
work queued 0.0
cntxt switches 1.1331447E+7
cntxt switches(idle) 1.6273097E+7
Scheduler ID 3.0
num users 16.0
num runnable 0.0
num workers 12.0
idle workers 11.0
work queued 0.0
cntxt switches 1.1110251E+7
cntxt switches(idle) 1.624729E+7
Scheduler Switches 0.0
Total Work 3.1632352E+7

Below is an explanation of some of the key statistics above:
⦁ num users: This is the number of SQL Server threads currently in the scheduler.
⦁ num runnable: This is the number of actual SQL Server threads that are runnable.
⦁ num workers: This is the actual number of workers there are to process threads. This is the size of the thread pool.
⦁ idle workers: The number of workers that are currently idle.
⦁ cntxt switches: The number of context switches between runnable threads.
⦁ cntxt switches (idle): The number of context switches to the idle thread.
[2000] Added 4-17-2003

DBCC TRACEON & DBCC TRACEOFF: Used to turn trace flags on and off. Trace flags are often used to turn specific server behavior or server characteristics on and off temporarily. On rare occasions, they can be useful for troubleshooting SQL Server performance problems. Example:

To use the DBCC TRACEON command to turn on a specified trace flag, use this syntax:
DBCC TRACEON (trace# [,...n])
To use the DBCC TRACEOFF command to turn off a specified trace flag, use this syntax:
DBCC TRACEOFF (trace# [,...n])
You can also use the DBCC TRACESTATUS command to find out which trace flags are currently turned on in your server using this syntax:
DBCC TRACESTATUS (trace# [,...n])
For specific information on the different kinds of trace flags available, search this website or look them up in Books Online.

[6.5, 7.0, 2000] Updated 3-20-2006

DBCC UPDATEUSAGE: The official use for this command is to report and correct inaccuracies in the sysindexes table, which may result in incorrect space usage reports. Apparently, it can also fix the problem of unreclaimed data pages in SQL Server. You may want to consider running this command periodically to clean up potential problems. This command can take some time to run, and you want to run it during off hours because it will negatively affect SQL Server’s performance when running. When you run this command, you must specify the name of the database that you want affected. Example:
DBCC UPDATEUSAGE ('databasename')
[7.0, 2000] Updated 3-20-2006

© 2000 – 2008 vDerivatives Limited. All Rights Reserved.

How to develop with various app.config files

When I test my application in Microsoft Visual Studio I want to use an alternate app.config, different from production. In my development environment I have other connection strings, etc.

To switch between the development and production configurations I came up with the following setup:

I want to switch configurations in the IDE and use the appropriate app.config file.

For each environment I added a separate app.config file to my project, named after the build configuration (for example app.config.Debug and app.config.Release).

Next I added a pre-build.bat and a post-build.bat to the solution.

The pre-build.bat file contains one command to replace app.config for the selected configuration:

copy /y %1\app.config.%2 %1\app.config
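
For example, when building the Debug configuration of a project located in C:\MyProject (an illustrative path), the pre-build event shown further below expands this to:

copy /y C:\MyProject\app.config.Debug C:\MyProject\app.config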

The post-build.bat file contains one command to replace the app.config with the release or “production” configuration.
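
The post-build.bat itself is not shown here; a minimal sketch, assuming the production file is named app.config.Release:

copy /y %1\app.config.Release %1\app.config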

The last step is to tie these components together. Add a Pre-build and a Post-build event on the project properties page; you can find these settings by right-clicking the project and clicking Properties.

The command I added to start pre-build.bat is:

"$(ProjectDir)pre-build.bat" "$(ProjectDir)" "$(ConfigurationName)"

and for the post-build.bat the command is:

"$(ProjectDir)post-build.bat" "$(ProjectDir)"