Thursday 20 November 2008

MySQL Triggers

My recent 'baptism of fire' with MySQL triggers has revealed a couple of differences from their cousins in the SQL Server and Oracle world.

1) You need separate triggers for UPDATE, INSERT and DELETE. Oracle and SQL Server will let you combine these into one trigger.

2) You cannot create triggers dynamically using a procedure. The PREPARE/EXECUTE commands in stored procedures work fine when creating a table, but try creating a trigger and the parser throws an error stating that you cannot run that type of command.

The real problem for me has been 2). I often rely on writing procedures to apply auditing to a table (composed of an audit table and a trigger to insert the data from the main table into the audit table).
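
Since the triggers cannot be generated from a procedure, they have to be written out by hand for each audited table. Here is a minimal sketch of what that looks like in MySQL, using made-up table and column names (and note the separate trigger per event, as per point 1):

-- Hypothetical audited table: myTable(id INT PRIMARY KEY, aValue VARCHAR(10))
CREATE TABLE myTable_AUDIT (
  audit_id INT AUTO_INCREMENT PRIMARY KEY,
  id INT,
  aValue VARCHAR(10),
  audited_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER myTable_aud_ins AFTER INSERT ON myTable
FOR EACH ROW
BEGIN
  INSERT INTO myTable_AUDIT (id, aValue) VALUES (NEW.id, NEW.aValue);
END//

CREATE TRIGGER myTable_aud_upd AFTER UPDATE ON myTable
FOR EACH ROW
BEGIN
  INSERT INTO myTable_AUDIT (id, aValue) VALUES (NEW.id, NEW.aValue);
END//
DELIMITER ;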

Thursday 6 November 2008

Creating copies of tables in Oracle, SQL Server and MySQL

As the sub-title of my blog suggests, I am from an Oracle background. But I also have dealings with SQL Server and MySQL databases. In fact the latest project I'm working on is going to have to work with all three databases!

One issue I have encountered recently is how to create copies of tables in SQL Server and MySQL. In Oracle you would use the AS keyword in the CREATE TABLE statement and select from the table that is being copied. For example:

CREATE TABLE myCopyTable AS
SELECT * FROM myOriginalTable;

You can add a WHERE clause to restrict the records that are included in the copy. Making this WHERE clause a condition that can never be true (such as WHERE 1 = 2) will result in a copy of the table structure without the data.
Note:
This copy does not include any indexes/constraints on the original table.
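
For example, to create an empty structure-only copy (myEmptyCopy is just an illustrative name):

CREATE TABLE myEmptyCopy AS
SELECT * FROM myOriginalTable
WHERE 1 = 2;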

In MySQL it is similar but you do not need the AS keyword. For example:

CREATE TABLE myCopyTable
SELECT * FROM myOriginalTable;

Alternatively, you can create an exact structural copy including the indexes and constraints (though note that foreign keys are not carried across). This is achieved by using the LIKE keyword.

CREATE TABLE myCopyTable
LIKE myOriginalTable;

Note: This won't take the data across; you would have to insert that afterwards
(e.g. INSERT INTO myCopyTable SELECT * FROM myOriginalTable;)

Finally, SQL Server... which wins the prize for the strangest implementation. You actually use a SELECT command to create your new table :o

SELECT *
INTO myCopyTable
FROM myOriginalTable;

I really don't like the way this is implemented, as there is no mention of a table being created anywhere in the SQL statement, even though that is exactly what the user is trying to achieve.
Anyhow, there's no point complaining; at least they provided a way of copying a table :)
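
Incidentally, the WHERE 1 = 2 trick from the Oracle example should work here too, if you only want the structure:

SELECT *
INTO myEmptyCopy
FROM myOriginalTable
WHERE 1 = 2;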

Friday 31 October 2008

Auto numbering fields in Oracle

Auto numbering fields often come up in debates between fans of Oracle and SQL Server. The SQL Server devotee will argue that their database has the advantage as it provides easy-to-use auto numbering fields. The Oracle believer's come-back will undoubtedly be... "ah, but you can use Triggers".

So how do you do it? Luckily, I've just been implementing this very thing in Oracle, so I thought I'd share my solution with you.

First off, let me define a table that the solution will be based upon....

CREATE TABLE TestTable
( TestTable_PK NUMBER(10) NOT NULL PRIMARY KEY,
  AValue VARCHAR2(10)
);

The first column (TestTable_PK) is the one which we want to populate with an auto numbering value.

# Applying auto-numbering:

The auto-numbering solution is made up of two components:
  1. The Sequence - to provide the auto numbering value.
  2. The Trigger - to add the auto numbering value to the row.
So, let us start by defining the Sequence....

CREATE SEQUENCE TestTable_SEQ
MINVALUE 0
MAXVALUE 999999999999999999999999999
START WITH 1
INCREMENT BY 1;

...and then we can define our auto numbering Trigger.

CREATE TRIGGER TestTable_TRG
BEFORE INSERT ON TestTable
FOR EACH ROW
BEGIN
SELECT TestTable_SEQ.nextval INTO :new.TestTable_PK FROM dual;
END;

The Trigger has to fire before the insert, otherwise the Primary Key constraint will throw an error. It also has to fire for each row because, if we are inserting multiple records, we want a different sequence value for each. The :new variable in the trigger references the row contents that will be inserted into the table after the Trigger has fired.

# Returning the auto-numbering value:

Okay, so now we have our auto numbering working, how can we insert data and return the auto-generated number? This can be achieved by using the RETURNING keyword on the INSERT statement. For example:

INSERT INTO TestTable
(aValue)
VALUES
('Some Data')
RETURNING TestTable_PK INTO :value;

The :value is a bind variable that can be queried in the client to get the returned field.
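
If you are working inside PL/SQL rather than from a client, the equivalent (a quick sketch using the table above) is to return the value into a local variable:

DECLARE
  newKey TestTable.TestTable_PK%TYPE;
BEGIN
  INSERT INTO TestTable (AValue)
  VALUES ('Some Data')
  RETURNING TestTable_PK INTO newKey;
  dbms_output.put_line('Generated key: ' || newKey);
END;
/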

# Generic Solution:

In the code block below I have included the code for a PL/SQL package that I have written to automatically add auto-numbering to a specified table. I do make an assumption in my code that the column being used as the auto numbering field is named after the table with a _PK suffix (e.g. TestTable_PK). You could write code to query the data dictionary views to return the actual Primary Key, but this could cause problems if the Primary Key includes more than one field.

Hope you find it useful.

CREATE OR REPLACE PACKAGE PKG_Utilities
AUTHID DEFINER
AS
  /* ------------------------------------------------------
     Add Auto numbering to the Primary Key of the table.
     ------------------------------------------------------ */
  PROCEDURE proc_AddAutoNumbering(
    tableName IN VARCHAR2);
END;
/

CREATE OR REPLACE PACKAGE BODY PKG_Utilities
AS
  /* ***********************************************************************
     FORWARD DECLARATION OF PRIVATE METHODS
     *********************************************************************** */
  FUNCTION func_AddAutoNumberingSequence(
    tableName IN VARCHAR2) RETURN VARCHAR2;
  PROCEDURE proc_AddAutoNumberingTrigger(
    tableName IN VARCHAR2,
    sequenceName IN VARCHAR2);
  FUNCTION func_ObjectExists(
    /* Does the object exist in the current schema? */
    objectName IN VARCHAR2,
    objectType IN VARCHAR2)
    RETURN BOOLEAN;
  FUNCTION func_GetSafeIdentifier(
    /* Returns an identifier under 26 characters in length
       that can be concatenated with a suffix such as _TRG and
       still be under the 30 character limit for Oracle object names */
    objectName IN VARCHAR2)
    RETURN VARCHAR2;

  /* ***********************************************************************
     PUBLIC METHODS
     *********************************************************************** */
  PROCEDURE proc_AddAutoNumbering(
    tableName IN VARCHAR2)
  IS
    sequenceName VARCHAR2(200);
  BEGIN
    sequenceName := func_AddAutoNumberingSequence(tableName);
    proc_AddAutoNumberingTrigger(tableName, sequenceName);
  END;

  /* ***********************************************************************
     PRIVATE METHODS
     *********************************************************************** */
  FUNCTION func_AddAutoNumberingSequence(
    tableName IN VARCHAR2)
    RETURN VARCHAR2
  IS
    execute_sql VARCHAR2(1000);
    sequenceName VARCHAR2(30);
  BEGIN
    sequenceName := func_GetSafeIdentifier(tableName) || '_SEQ';
    IF (NOT(func_ObjectExists(sequenceName, 'SEQUENCE'))) THEN
      /* N.B. the schema name (PremierEnvoy) is hard-coded from my environment */
      execute_sql := 'CREATE SEQUENCE PremierEnvoy.' || sequenceName || chr(10) ||
                     'MINVALUE 0 ' || chr(10) ||
                     'MAXVALUE 999999999999999999999999999' || chr(10) ||
                     'START WITH 1' || chr(10) ||
                     'INCREMENT BY 1';
      EXECUTE IMMEDIATE execute_sql;
    END IF;
    RETURN sequenceName;
  EXCEPTION
    WHEN others THEN
      raise_application_error(-20001, 'Failed to Add Sequence - ' || execute_sql);
  END;

  PROCEDURE proc_AddAutoNumberingTrigger(
    tableName IN VARCHAR2,
    sequenceName IN VARCHAR2)
  IS
    triggerName VARCHAR2(30) := func_GetSafeIdentifier(tableName) || '_TRG';
    execute_sql VARCHAR2(2000) :=
      'CREATE TRIGGER PremierEnvoy.' || triggerName || chr(10) ||
      'BEFORE INSERT ON PremierEnvoy.' || tableName || chr(10) ||
      'FOR EACH ROW ' || chr(10) ||
      'BEGIN ' || chr(10) ||
      '  SELECT ' || sequenceName || '.nextval' || chr(10) ||
      '  INTO :new.' || tableName || '_PK FROM dual;' || chr(10) ||
      'END;';
  BEGIN
    IF (NOT(func_ObjectExists(triggerName, 'TRIGGER'))) THEN
      EXECUTE IMMEDIATE execute_sql;
    END IF;
  END;

  FUNCTION func_ObjectExists(
    objectName IN VARCHAR2,
    objectType IN VARCHAR2)
    RETURN BOOLEAN
  IS
    objectCount NUMBER;
  BEGIN
    SELECT count(object_name)
    INTO objectCount
    FROM user_objects
    WHERE UPPER(object_name) = UPPER(objectName)
    AND UPPER(object_type) = UPPER(objectType);
    RETURN (objectCount > 0);
  END;

  FUNCTION func_GetSafeIdentifier(
    objectName IN VARCHAR2)
    RETURN VARCHAR2
  IS
  BEGIN
    IF (LENGTH(objectName) < 25) THEN
      RETURN objectName;
    ELSE
      RETURN SUBSTR(objectName, 1, 25);
    END IF;
  END;
END;
/
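
To apply auto-numbering to the TestTable example from earlier, the call is then simply:

BEGIN
  PKG_Utilities.proc_AddAutoNumbering('TestTable');
END;
/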

Thursday 23 October 2008

Deep Copies in .NET

A problem that I often have when writing WCF services is that WCF complains when you try to send an object that has been cast to an ancestor over the wire (as a DataContract). WCF gets confused as to what type of object it is returning.

The only solution is to create a new instance of the parent class and copy all the fields/properties from the child. This is effectively making a Deep Copy of the object... so it got me thinking about how I could implement a Deep Copying function in .NET.

My solution has been implemented as an Extension Method, as I want it to be called on a class but I don't want to have a distant ancestor to all my classes that contains this method. The method itself uses reflection to work out the field and property values that it needs to copy across.

The code is as follows:
// N.B. extension methods must be declared inside a static class,
// and this method requires using System.Reflection;
public static object GetDeepCopy<T>(this T originalObject, Type newObjectType)
{
    object newObject = Activator.CreateInstance(newObjectType);

    // copy fields
    FieldInfo[] fields = newObject.GetType().GetFields();
    foreach (FieldInfo field in fields)
    {
        field.SetValue(newObject, field.GetValue(originalObject));
    }

    // copy properties
    PropertyInfo[] properties = newObject.GetType().GetProperties();
    foreach (PropertyInfo property in properties)
    {
        property.SetValue(newObject, property.GetValue(originalObject, null), null);
    }
    return newObject;
}
This method can be used against any class if its namespace is included in the code in which you are using the objects. Here is an example of it in use:

ChildClass one = new ChildClass()
{
    Active = true,
    Address = "HHWWH",
    testone = Testy.two,
    otherstuff = "sdsadasd"
};

ParentClass two = (ParentClass)one.GetDeepCopy(typeof(ParentClass));

As you can see, the routine requires that a type is passed in, and the returned value must then be cast to that same type.

Monday 13 October 2008

Checking if a string is empty

In C# there are a variety of ways of checking if a string is empty. The main methods I have seen people use are:
  1. myString == String.Empty;
  2. myString == "";
  3. myString.Length == 0;
So, which is the best one to use? FxCop provides the answer in its performance rule TestForEmptyStringsUsingStringLength.

As the name probably gives away, the most efficient method is 3. myString.Length == 0. Alternatively, we can use the String.IsNullOrEmpty() method to test that the string isn't null at the same time.
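
Putting that together, a trivial sketch of both forms:

// Safest: handles null as well as empty
if (String.IsNullOrEmpty(myString))
{
    // ... treat as empty
}

// Fastest check when myString is known not to be null
if (myString.Length == 0)
{
    // ... treat as empty
}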

Thursday 9 October 2008

.NET Role Based Security

I am currently looking at implementing Users and Security in a new product. Ideally I want to use a Role Based Security (RBS) system, so I've been revisiting the Role Based Security classes and interfaces that exist in the .NET framework. I have to say that I believe Microsoft have got this area of the .NET framework wrong*. This is why...

RBS has three components:
  1. The Users
  2. The Roles
  3. The Permissions (AKA rights)
Users are assigned to 0..* roles and roles contain 0..* permissions.
Permissions can be assigned to many roles.
(see the diagram on the wikipedia page here).

In the .NET framework Microsoft have developed classes such as WindowsPrincipal and GenericPrincipal to test whether a User is a member of a particular Role.
For example:
GenericIdentity identity = new GenericIdentity("IMitchell");
string[] roles = new string[] { "Administrator", "Manager" };
GenericPrincipal principal = new GenericPrincipal(identity, roles);

// Imperative Security Check
bool able = principal.IsInRole("Administrator");

// Declarative Security Check (note: this demand is evaluated against
// Thread.CurrentPrincipal, so the principal must be assigned to the
// current thread first)
[PrincipalPermission(SecurityAction.Demand, Role = "Manager")]
private void DoSomething()
{
    ....
}
I don't think this is right... I don't think that roles should play a part in the programming. Really you should be checking whether the user has permission to do a certain task. Whether they acquired that right from being in the "Administrator" role or the "Manager" role is irrelevant to the program.

Also, by including security checks based on role names you are imposing those names on the end users... who would be far better off defining their own roles to reflect their own organisational structure.

A good example of an RBS system in operation is the Oracle Database. In the database, various permissions on database objects are automatically defined by the database (e.g. select from aTable, execute a stored procedure). Oracle leaves it to the Database Administrator to define their own roles and users and build the associations. So one can assume that under the hood Oracle is actually checking whether the User has the permission, rather than being concerned with how they got it!

I'm not going to be overly critical though, as Microsoft have provided the IPrincipal and IIdentity interfaces to help us build custom implementations of RBS. I will be using these to develop a permission-based checking security system. If I manage to develop this in a non-product-specific way, I'll post it on this blog in the near future.
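
To give a flavour of the direction I have in mind, here is a minimal sketch (all the names here are hypothetical, not a finished design) of an IPrincipal implementation where the user's roles have already been expanded into the permissions they grant, so that call sites check permissions instead of role names:

using System.Collections.Generic;
using System.Security.Principal;

public class PermissionPrincipal : IPrincipal
{
    private readonly IIdentity _identity;
    private readonly Dictionary<string, bool> _permissions;

    // permissions = the flattened set of permissions granted by the
    // user's roles, resolved at login time from the security store
    public PermissionPrincipal(IIdentity identity, IEnumerable<string> permissions)
    {
        _identity = identity;
        _permissions = new Dictionary<string, bool>();
        foreach (string permission in permissions)
        {
            _permissions[permission] = true;
        }
    }

    public IIdentity Identity
    {
        get { return _identity; }
    }

    // The framework calls this IsInRole, but here the "role" name is
    // treated as a permission name (e.g. "CanApproveExpenses")
    public bool IsInRole(string permission)
    {
        return _permissions.ContainsKey(permission);
    }
}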



* This of course is only my opinion ;)

Friday 26 September 2008

Why FxCop is essential

As part of my recent work configuring an automated build server, I was asked to ensure that FxCop was run as part of the process. I have had, in previous jobs, experience of FxCop in its integrated form (as the code analysis tool in Visual Studio Team System). But I have to say it was something that was never used that often, and I have to admit most of us ignored what it was saying.

We decided to aim to reduce the FxCop errors in our build report to zero. It seemed a little harsh at first, but I think this is going to seriously help our coding standards.

First off (for all you doubters out there), FxCop is not always right. There are occasions when it is unhappy about something that cannot be changed. For example, we use NUnit for our unit tests, and FxCop loves to complain that many of the test methods should be static (as they don't reference anything in the class). However, if you convert these to static methods, NUnit fails.

Luckily FxCop provides a means to ignore methods that are breaking rules. This is how you do it:
  1. In Visual Studio, open up the .NET project properties and select the Build tab. In the conditional compilation symbols box add CODE_ANALYSIS.
  2. Add using System.Diagnostics.CodeAnalysis; to the code file.
  3. Above the method that is giving the error, add a [SuppressMessage] attribute, referencing the FxCop rule assembly, the rule Id/Name and a Justification (this is optional but I strongly recommend it is used so that future developers will know why this isn't being checked). Below is an example from one of my NUnit test files to avoid the previously mentioned static problem:
[SuppressMessage("Microsoft.Performance", "CA1822:MarkMembersAsStatic",
Justification = "FXcop doesn't realise that this Nunit test method cannot be static")]

You can also turn off particular rules from the FxCop configuration tool, or as a command line argument (e.g. /ruleid:-Microsoft.Design#CA1014; the minus before the rule name indicates that this rule should not be used).

This suppression is useful, but don't just suppress every warning and error. This will lose any benefit of running FxCop!

So, what sort of genuine errors has FxCop found? Many are small things like picking up incorrect casing in names of methods, parameters and other class members. These may seem insignificant to the project as a whole but they are key to future developers understanding the project.

Some of the more interesting errors that FxCop has returned have involved Globalisation issues. I have to admit that this is something that is often missed in project development. For example, a recent FxCop report noticed that I created a DataTable without specifying a Locale... so what, you might say. However, if this isn't set to CultureInfo.InvariantCulture it could affect any sorting performed on this DataTable by users in different locations!
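
The fix is a one-liner at the point the table is created (a quick sketch; the table name is made up):

// requires using System.Data; and using System.Globalization;
DataTable results = new DataTable("Results");
results.Locale = CultureInfo.InvariantCulture; // consistent sorting for all users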

In conclusion, we should be using code analysis tools like FxCop. We may think that our code is perfect, but as the analysis proves, it often is not.

Friday 12 September 2008

Cryptographic failure while signing assembly (MSBuild / CruiseControl .NET)

I came across an interesting error through CruiseControl, when I updated one of our assemblies to be signed with a Strong Named Key File.  The CruiseControl build report stated:

error CS1548: Cryptographic failure while signing assembly 'c:\myCheckoutArea\myProject\obj\Debug\myProject.dll' -- 'Access is denied. '

After much searching on the Internet, and several red herrings later, I finally stumbled across a solution. Simply change the permissions on the %SystemDrive%\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys folder so that the user running the build has full access to it.
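
If you prefer the command line to Explorer, something like the following should do it (substitute the account your build actually runs under for BuildUser):

cacls "%SystemDrive%\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys" /E /G BuildUser:F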

Wednesday 3 September 2008

NUnit through Visual Studio

As you may know Visual Studio (VS) has quite nice integration with MSTest projects.  It has a test list editor and view that allow you to see the tests in the solution and choose which to run.  If any fail you can debug through the tests using the VS debugger.

However, as you may have gathered from previous posts, we are using NUnit, basically because we don't want to buy an extra VS licence for our automated build machine (yes, thanks Microsoft, for not including MSTest in the .NET SDK!).

Thankfully, I have discovered a nice add-in for VS called NUnitForVS. This allows NUnit tests to be run and debugged in the same way that MSTest projects are.

Through my install I did discover a couple of issues:

1) You have to manually edit the NUnit Test project's .csproj file to contain a ProjectTypeGuids element. The one specified on the linked website did not work with VS 2008, so I checked one of my MSTest projects and found the following value, which did work...

<ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>

2) My Unit Tests didn't appear at first in the Test View. You have to rebuild the solution for it to recalculate the list.

Hint - For ease, I have added a new template project for NUnit tests. This was created as a Class Library, but I have manually edited the .csproj file to contain the ProjectTypeGuids element I mentioned above.


Friday 29 August 2008

NUnit missing assembly reference error

Recently I have been getting a few errors running NUnit on our automated build server. NUnit is being called through an MSBuild file, which is being called by CruiseControl.NET. The CruiseControl build log has been recording the error messages below:
EventLogListenerTest.cs (6,7): error CS0246: The type or namespace name 'NUnit' could not be found (are you missing a using directive or an assembly reference?)
EventLogListenerTest.cs (18,10): error CS0246: The type or namespace name 'TestFixtureSetUp' could not be found (are you missing a using directive or an assembly reference?)
EventLogListenerTest.cs (18,10): error CS0246: The type or namespace name 'TestFixtureSetUpAttribute' could not be found (are you missing a using directive or an assembly reference?)
EventLogListenerTest.cs (25,10): error CS0246: The type or namespace name 'TestFixtureTearDown' could not be found (are you missing a using directive or an assembly reference?)
EventLogListenerTest.cs (25,10): error CS0246: The type or namespace name 'TestFixtureTearDownAttribute' could not be found (are you missing a using directive or an assembly reference?)
These errors only happened when I ran CruiseControl as a service... so I logically assumed that this must be a security issue, with the Local Service account not being able to see the NUnit assemblies. However, the issue was not resolved by running the service as the logged-in user.

The solution I found was to manually add nunit.framework.dll from the NUnit directory in "c:\program files" to the Global Assembly Cache (GAC). This resolved all the "missing assembly reference" errors.
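
For reference, the .NET SDK's gacutil tool will do this from the command line (the path below is just illustrative; use your own NUnit install directory):

gacutil /i "C:\Program Files\NUnit\bin\nunit.framework.dll"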

Wednesday 20 August 2008

XML Namespaces and LINQ to XML

Recently I've been writing a program to automatically generate an MSBuild file containing all of our projects. This is then used by CruiseControl.NET to continuously build and test our source code.

I decided to use the new LINQ to XML to write the MSBuild file, but came across a problem when I ran the resulting file through MSBuild. The error message I received was:
The element beneath element may not have a custom XML namespace.

Investigation revealed that the root node of the MSBuild XML has an XML namespace (xmlns="http://schemas.microsoft.com/developer/msbuild/2003"), and when I added a new XElement to the root element it automatically added a blank namespace attribute to the XElement (xmlns=""), thus causing the error message.

The solution is to always add an XNamespace object of the SAME address to the XElement you are adding (see below).
XNamespace xmlns = "http://schemas.microsoft.com/developer/msbuild/2003";

XElement root = new XElement(xmlns + "Project",
    new XAttribute("DefaultTargets", "Build"));

XElement newBuildTarget = new XElement(xmlns + "MSBuild",
    new XAttribute("Projects", notShownSolutionPath),
    new XAttribute("Targets", notShownTargets));
root.Add(newBuildTarget);

Bizarrely enough, this will then actually remove the xmlns attribute from the added element. But thankfully it solves the problem.

Thursday 14 August 2008

Windows Management Instrumentation (WMI) queries

Often when coding you need to access the configuration of the server that your software is running on. This might be to determine if a particular device is connected, or just to provide remote support information about the machine that a problem occurred on.

Microsoft have built a feature called WMI into the Windows Driver Model. This provides an interface which you can query (using WMI queries) to access information about the system and its components.

.NET provides a namespace (System.Management) that contains classes to run queries and access the results returned. The code below shows a simple example of using a query in C# to access drive information on a specific machine.
using System;
using System.Management;

public static void DisplayDrives()
{
    // \\aMachine\root\cimv2 - note the doubled-up escaping
    ManagementScope scope = new ManagementScope("\\\\aMachine\\root\\cimv2");
    ObjectQuery query = new ObjectQuery(
        "SELECT Name, Size FROM Win32_LogicalDisk WHERE DriveType = 3"
    );
    // N.B. DriveType 3 = local fixed drives ONLY

    // a searcher is needed to run the query against the scope
    ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);
    ManagementObjectCollection drives = searcher.Get();
    foreach (ManagementObject drive in drives)
    {
        Console.WriteLine(string.Format("Drive:{0} Size={1}",
            drive["Name"], drive["Size"]));
    }
}
As you can see, it's quite straightforward to use. However, one of the problems I have come across is: how do you know that Win32_LogicalDisk is the class that contains information about the drives?

I use this page, which I found hidden away in MSDN (I won't start ranting about how difficult it is to find anything in MSDN). It contains a breakdown of the different types of objects that are available for querying. You can even view the properties they contain.

I've just come across this application that Ben Coleman has created for running WMI queries on local or remote machines. It's quite useful for testing your WMI queries before adding them into your code.

Wednesday 13 August 2008

Free Project Management Tools

I have spent the last few days evaluating free project management tools, as we have decided that we'd like to plan out our latest development in a bit more detail. I've never really been much of a fan of Microsoft Project (a bit overly complicated), so I have been looking for something that is easy to use and web based.

The solution that I have decided to use is Mingle by ThoughtWorks. This is free if you have 5 users or fewer, or if you are lucky enough to work for a non-profit or charity. The only requirements are a web server and a MySQL 5.0 database.

Mingle is based around a wiki-style system and uses items it calls "Cards" to store information about the project. For example, you can create a card for each of your Use Cases and then add a card property of Requirement Status, which would allow you to move these cards through pre-defined project states (such as Analysis, Design, Implementation and Completed). You can even create card trees, which in this example would allow you to break the Requirement into software engineering tasks (such as Code, Unit Test etc.).

Cards can also be used to keep track of defects and other project items. Mingle is highly configurable and can be adapted to a variety of project management uses. Thoroughly recommended.

Wednesday 6 August 2008

Custom Event Log Trace Listener

We all love .NET tracing; who wouldn't? It's so easy to use (see here if you've never come across tracing before).

However, recently I've come across a mild annoyance: the standard .NET event log listener logs everything as Information... great!

Luckily the .NET framework provides the ability to write your own custom trace listeners, so I decided to write my own. I have included the code for this trace listener below:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;

namespace aCompany.Diagnostics
{
    public class EventLogListener : TraceListener
    {
        #region Field(s)
        private EventLog _eventLog;
        #endregion

        #region Constructor(s)
        public EventLogListener(string sourceName)
        {
            _eventLog = new EventLog();
            _eventLog.Source = sourceName;
        }
        #endregion

        #region Public Method(s)
        public override void Write(string message)
        {
            _eventLog.WriteEntry(message, EventLogEntryType.Information);
        }

        public override void WriteLine(string message)
        {
            this.Write(message + Environment.NewLine);
        }

        public override void Fail(string message)
        {
            _eventLog.WriteEntry(message, EventLogEntryType.Error);
        }

        public override void Fail(string message, string detailMessage)
        {
            _eventLog.WriteEntry(message + Environment.NewLine + detailMessage, EventLogEntryType.Error);
        }

        public override void WriteLine(string message, string category)
        {
            // category must be the name of an EventLogEntryType value
            // (e.g. "Warning"); Enum.Parse will throw otherwise
            EventLogEntryType entryType = (EventLogEntryType)Enum.Parse(typeof(EventLogEntryType), category);
            _eventLog.WriteEntry(message, entryType);
        }
        #endregion
    }
}

This listener assumes that an Information event should be created if Trace.Write() or Trace.WriteLine() is called. An Error event is created if Trace.Fail() is called.

There is also a WriteLine() overload that takes a category as an argument. I have used this to represent the name of an EventLogEntryType value, so it can be used to create any type of event.

The event source that the events will be added to is passed in as an argument to the constructor.
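
As a quick illustration, wiring it up in code might look like this (the source name "MyApplication" is made up, and the event source must already exist or be creatable by the running account):

// Register the custom listener once at start-up
Trace.Listeners.Add(new EventLogListener("MyApplication"));

Trace.WriteLine("Service started");              // Information event
Trace.WriteLine("Disk nearly full", "Warning");  // Warning event
Trace.Fail("Update failed");                     // Error event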

Hope you find this useful.

Monday 28 July 2008

Visual Studio 2008 Test List Editor Bug

Monday morning: after another weekend of travelling around the country (this time London), I have returned to work and discovered a juicy bug in Visual Studio 2008 (VS).

I noticed that when I closed my projects and reopened them, all the Unit Test lists disappeared from my Test List Editor! Investigation revealed that VS had actually replaced the .vsmdi file (myProject.vsmdi) with a new one (myProject1.vsmdi). To get the original test list back you have to delete this new one and re-add the original one from the project directory.

A temporary work-around for the issue is to close the Test List Editor prior to closing the solution/project or VS.

I've checked on Microsoft's Connect website and this has already been reported. Interestingly enough, I came across this post by Aaron Stebner (here!) about best practice when logging a bug with Microsoft.

Friday 25 July 2008

In the beginning there was light...

I have been meaning to start putting together a blog for many years now. Finally, one quiet day in the office, I have put this together!

So to start, I have prepared a question and answer session with myself. Don't worry, I haven't got multiple personalities.

Q. Why, as a so-called software engineer (SE), have you used the Google blogging site rather than create your own site using this or that blogging software?
A. Okay, I'm a bit lazy... also, on a more serious note, I think there is a mindset amongst many SEs to always go and reinvent the wheel. If Google are providing this (seemingly decent) service free of charge, then why should I mess around with my own site :).

Q. What is this blog for? Do we really need to hear the rantings of another person?
A. Well, I want to use this to share my experiences of Software Development... and hopefully these will help other SEs solve their issues. Also, I want to add some more real-life examples of using analysis and design to build "quality" software. It seems that there is a real lack of blogs that tackle this subject.

Q. Who are you and why should we listen to you?
A. I am a Senior Software Engineer currently working for a small expense management firm in Manchester, UK. Before this role I was working for large firms in the Pharmaceutical sector. I am currently working with C# and ASP.NET, and have previous experience with Delphi and Java. Throughout my career (6 years) I have used UML and Oracle Databases. I have also been involved in training SEs in UML and Object-Oriented analysis and design.
In answer to the other question... errr... you don't have to listen to me if you don't want to :)