PowerShell Workflow Exit Handling

As I mentioned here, I post on things that I find interesting; I don’t care what any of you think  😀

And this one just tickled memories of several things I’ve stumbled across along the way that fit together nicely (I hear an old Reese’s commercial jingle coming on).

I just read a post on the OpsMgr.Authoring newsgroup asking about detecting errors in PowerShell scripts.  PowerShell can be used to deliver discovery and property bag data as mentioned here (I saw another blog out there about it, but I can’t find the link now).  However, if the script logic itself fails or encounters an error, there are multiple layers through which the error code must be fed back to OpsMgr.

Thanks to the broad reach of the blogosphere and search engines, I am happy to be able to present the following mashup!

First, change the command XML in my MyMP sample so that the CommandLine reads

<CommandLine>-Command " &amp; { .\PSDisc.ps1; exit $LASTEXITCODE }"</CommandLine>

This makes powershell.exe exit with the script’s own exit code rather than always returning 0.

(H/T to alemyis’ comment for that syntax)

Next, add the <ExitCodeMatches> tag to the end of the CommandExecuter module.

    <ExitCodeMatches Operator="MatchesRegularExpression">[^0]+</ExitCodeMatches>

(You can also use DoesNotMatchRegularExpression)

Next, use the PowerShell Try-Catch function available here, but instead of using Throw, which will result in a return code of 1, use exit ###.
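A minimal sketch of that pattern (the failing call, the path, and the code 22 are arbitrary examples, not anything from the original MP):

```powershell
# Sketch: convert any terminating error into a specific exit code.
# The path and the code 22 are arbitrary examples.
trap {
    # The wrapper command line picks this up via $LASTEXITCODE
    exit 22
}

# ... discovery / property bag logic would go here ...
Get-Item "C:\SomePathThatMustExist" -ErrorAction Stop | Out-Null

# Reached only if nothing trapped above
exit 0
```

With the CommandLine wrapper above, OpsMgr then sees 22 as the process exit code, and the ExitCodeMatches regular expression treats it as a failure.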

Better yet, encapsulate it as a common function by using a custom module as the momteam blogged here!

The result will be an event that looks something like this (my script used "exit 22;" for the particular error condition):


(12/8/08 Updated with EventPolicy; Thanks to AHood for catching the error)

Posted in Uncategorized | Leave a comment

Two Things Everyone Should Know About Returning Multiple Property Bags

When using a script data source to create multiple property bags (this can be useful, for example, when the data is fed into a performance data mapper; each property bag becomes a mapped performance value), there are two common mistakes.

  1. After creating the property bag, adding its property/value pairs, and adding the bag to the ScriptAPI object, be sure to destroy the property bag before reusing the object reference variable.
  2. Use the ReturnItems call at the end to return multiple data items. 

The relevant lines are in the snippet below.  These are the most common mistakes around multiple property bags.

Set oAPI = CreateObject("MOM.ScriptAPI")
Set oBag = oAPI.CreatePropertyBag()
With oBag
  .AddProperty "PropertyName1", Value1
  .AddProperty "PropertyName2", Value2
End With
Call oAPI.AddItem(oBag)   ' add the bag to the ScriptAPI object
Set oBag = Nothing        ' destroy the bag before reusing the variable
Set oBag = oAPI.CreatePropertyBag()
' more logic and property additions here
Call oAPI.AddItem(oBag)
Set oBag = Nothing
Call oAPI.ReturnItems()   ' return all collected data items at once

Posted in Uncategorized | 2 Comments

Multiple Process Monitoring/Alerting

A couple of people have asked how to use the Windows Performance Counter monitoring for multiple processes in a single rule, which allows wildcard on Object, but not on Instance.  There are a couple of blog posts on multiple services, but I couldn’t find anything on processes.

The regular Windows Performance Counter provider actually does not work with multiple processes, but the WMI performance provider does. 

In the UI, you can create a unit monitor

WMI Performance Counters -> Static Thresholds -> Single Threshold -> Simple Threshold

In the authoring console, the path is slightly different to get to the same MonitorType

WMI Performance Counters -> Single Threshold -> Simple Threshold

You can obviously choose other types.

The available MonitorTypes that will handle multiple process instances without scripting are:

  1. Windows!Microsoft.Windows.WmiBased.Performance.ThresholdMonitorType: single-threshold monitor type
  2. Windows!Microsoft.Windows.WmiBased.Performance.DoubleThreshold: 3-state monitor (under, between, over thresholds)
  3. Windows!Microsoft.Windows.WmiBased.Performance.DeltaThreshold: for rate-of-change monitoring
  4. Windows!Microsoft.Windows.WmiBased.Performance.AverageThreshold: moving average changes, useful for monitoring stocks
  5. Windows!Microsoft.Windows.WmiBased.Performance.ConsecutiveSamplesThreshold: n consecutive samples over/under a threshold

What’s common to all of these is the WMI query and the mapping of WMI results to performance data.

WMI Namespace = Root\cimv2
Query = Select * from Win32_PerfFormattedData_PerfProc_Process where Name like "DLLHost%"

This query will give you performance data for *each* instance of DLLHost.  When you run these through the mapper, they "fan out" so that each instance of DLLHost’s performance data gets processed through the threshold filtering.  To find out the WMI name of the performance counter you want, you can test the WMI query in wbemtest (just run wbemtest from a command line, connect to Root\Cimv2, click the Query button, paste the query, and click Apply).  For example, "% Processor Time" as shown in perfmon is "PercentProcessorTime" in WMI.  Make sure you are using the WMI name.
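If you prefer PowerShell over wbemtest, a sketch like this (run on the target machine) tests the same query and shows you the WMI property names at the same time:

```powershell
# Test the WMI query outside of OpsMgr; one result per matching process instance
Get-WmiObject -Namespace "Root\cimv2" `
    -Query "Select * from Win32_PerfFormattedData_PerfProc_Process where Name like 'DLLHost%'" |
    Select-Object Name, PercentProcessorTime
```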

The mapper transforms the WMI data into the equivalent of "native" performance data.  Downstream modules such as ExpressionFilters or Alert write actions can’t tell the difference between data run through a mapper and data that came straight from Windows Performance Counter data sources.  You tell the mapper what you want it to look like.  For example, most of these counters would map as follows:


Instance = <Process Name>

Counter = <counter, such as "% Processor Time">

Value = <what you see in the perfmon graph>

So to do this for our example, the mapper would look as follows:

ObjectName = Process
CounterName = % Processor Time
InstanceName = $Data/Property[@Name='Name']$
Value = $Data/Property[@Name='PercentProcessorTime']$

Since the name and value are returned in the WMI results, we use the $Data…$ macro and the WMI names for the fields.
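Put together, the mapping step in a composite module might look roughly like this sketch (the module ID and the Perf alias are assumptions; System.Performance.DataGenericMapper is the generic performance mapper from the System Performance library):

```xml
<!-- Sketch only: the ID and the "Perf" alias are assumptions -->
<ConditionDetection ID="PerfMapper" TypeID="Perf!System.Performance.DataGenericMapper">
  <ObjectName>Process</ObjectName>
  <CounterName>% Processor Time</CounterName>
  <InstanceName>$Data/Property[@Name='Name']$</InstanceName>
  <Value>$Data/Property[@Name='PercentProcessorTime']$</Value>
</ConditionDetection>
```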

Once you’re past this step, everything else should be pretty familiar with regard to setting thresholds, averages, etc.

Tags: Authoring; WMI; Windows Process; Wildcard; Operations Manager 2007

Posted in Uncategorized | 2 Comments

When to use which XPath replacements

The reason I’m given for Microsoft not having documented the XML schemas for all the data types is that they are massively complex and simply hard to document.  Baelson once told me that "the schema document is this big," holding his hands about 14 inches apart to show the height of the stack of paper on which the schema document could be printed.

Part of the problem is that different terms are overloaded (such as "Context"), and part of it is the way data types are embedded on the fly in other data types.  When you map one data type to another using the mapper modules, the original data item is preserved inside an XML tag of the resultant data item.  For example, the WMI Event Provider has several lower-level modules under the covers, including a scheduler, a WMI probe, and a mapper module that "converts" the data returned from the WMI probe from PropertyBag data to Event data.  Mappers allow you to use values from the source data (the WMI-based PropertyBag) in the output data type.  You might take a WMI object property called "LoginName" and use it for the UserName property commonly found in Windows events.

As I mentioned, though, the PropertyBag data is not discarded, it’s embedded.

System.Event.Data represents an event.  The schema seems simple.  Just some properties you would expect in an event, and their values.  There’s no room in here for all the depth of properties you can get from a WMI object.

<DataItem type="System.Event.Data" time="2008-09-11T11:40:27.6416741-07:00" sourceHealthServiceId="093355C8-0283-EED8-A6BB-393E82B1FA19">
    <LoggingComputer />
    <UserName />
    <RawDescription />
    <CollectDescription Type="Boolean">true</CollectDescription>
    <!-- Trimmed for readability -->
    <EventDescription />
</DataItem>

The sticky thing happens when you drill into that trimmed section.  You see, a whole other data item can go in there, such as an event or, in the case of the XML I am using, a PropertyBag returned by the WMI event provider.  The full thing looks like this.

<DataItem type="System.Event.Data" time="2008-09-11T11:40:27.6416741-07:00" sourceHealthServiceId="093355C8-0283-EED8-A6BB-393E82B1FA19">
    <LoggingComputer />
    <UserName />
    <RawDescription />
    <CollectDescription Type="Boolean">true</CollectDescription>
        <DataItem type="System.PropertyBagData" time="2008-09-11T11:40:27.6416741-07:00" sourceHealthServiceId="093355C8-0283-EED8-A6BB-393E82B1FA19">
            <Property Name="__CLASS" VariantType="8">AUDIT_LOGIN</Property>
            <Property Name="__DERIVATION" VariantType="8">TRC_SECURITY_AUDIT,TRC_ALL_EVENTS,ALL_EVENTS,__ExtrinsicEvent,__Event,__IndicationRelated,__SystemClass</Property>
            <Property Name="__DYNASTY" VariantType="8">__SystemClass</Property>
            <Property Name="__GENUS" VariantType="3">2</Property>
            <Property Name="__PROPERTY_COUNT" VariantType="3">24</Property>
            <Property Name="__SUPERCLASS" VariantType="8">TRC_SECURITY_AUDIT</Property>
            <Property Name="ApplicationName" VariantType="8">Microsoft (r) Windows Script Host</Property>
            <Property Name="BinaryData" VariantType="8">’ 32 (0x20),0 (0x0),0 (0x0),(‘ 40 (0x28),8′ 56 (0x38),ô’ 244 (0xF4),1 (0x1),0 (0x0),0 (0x0),0 (0x0),0 (0x0),0 (0x0)</Property>
            <Property Name="ClientProcessID" VariantType="3">3428</Property>
            <Property Name="ComputerName" VariantType="8">DBServer</Property>
            <Property Name="DatabaseID" VariantType="3">1</Property>
            <Property Name="DatabaseName" VariantType="8">master</Property>
            <Property Name="EventSequence" VariantType="3">3262</Property>
            <Property Name="HostName" VariantType="8">DBServer</Property>
            <Property Name="IntegerData" VariantType="3">4096</Property>
            <Property Name="IsSystem" VariantType="3">0</Property>
            <Property Name="LoginName" VariantType="8">MANAGE\administrator</Property>
            <Property Name="LoginSid" VariantType="8">1 (0x1),5 (0x5),0 (0x0),0 (0x0),0 (0x0),0 (0x0),0 (0x0),5 (0x5),21 (0x15),0 (0x0),0 (0x0),0 (0x0),¾’ 190 (0xBE),¾’ 190 (0xBE),Q’ 81 (0x51),‚’ 130 (0x82),¶’ 182 (0xB6),u’ 117 (0x75),19 (0x13),J’ 74 (0x4A),3′ 51 (0x33),³’ 179 (0xB3),*’ 42 (0x2A),1 (0x1),ô’ 244 (0xF4),1 (0x1),0 (0x0),0 (0x0)</Property>
            <Property Name="NTDomainName" VariantType="8">MANAGE</Property>
            <Property Name="NTUserName" VariantType="8">administrator</Property>
            <Property Name="PostTime" VariantType="7">09/11/2008 11:40:08</Property>
            <Property Name="RequestID" VariantType="3">0</Property>
            <Property Name="SessionLoginName" VariantType="8" />
            <Property Name="SPID" VariantType="3">53</Property>
            <Property Name="SQLInstance" VariantType="8">MSSQLSERVER</Property>
            <Property Name="StartTime" VariantType="7">09/11/2008 11:40:08</Property>
            <Property Name="Success" VariantType="3">1</Property>
            <Property Name="TextData" VariantType="8">– network protocol: TCP/IP set quoted_identifier on set arithabort off set numeric_roundabort off set ansi_warnings on set ansi_padding on set ansi_nulls on set concat_null_yields_null on set cursor_close_on_commit off set implicit_transactions off set language us_english set dateformat mdy set datefirst 7 set transaction isolation level read committed</Property>
            <Property Name="TIME_CREATED" VariantType="8">128656320086870915</Property>
        </DataItem>
    <EventDescription />
</DataItem>

So you see, the System.Event.Data data item contains a System.PropertyBagData data item.  The property bag is returned from an event class in the SQL Server namespace.  How the heck is anyone supposed to document how to get to a piece of detail data (such as the SPID property) when the property bag contents depend on a WMI class, and therefore so does the event detail data?

The <DataItem> tag appears twice in the above XML, but you don’t use the first one in your XML path in OpsMgr.  The first one is always replaced with "$Data/".  After that, the path follows the document structure of the XML above.  To have the SPID number 53 in your alert text, you would use

$Data/DataItem/Property[@Name='SPID']$

Notice that the second <DataItem> tag is actually in the path, while the first is not.

What makes it more fun is when you start to use XPathQueries for things like rules and monitors.  Now you drop the first <DataItem> entirely.  The WMI Event Provider module outputs System.Event.Data, so your query in the <XPathQuery> tags would be

DataItem/Property[@Name='SPID']

No dollar signs.  No leading $Data path component.


As another example, Marius blogged about the Application log data type here.  In his example, System.ApplicationLog.GenericLogDataEntry was mapped into System.Event.Data.  If you were doing the mapping, you could take the "ERROR" property captured from the application text log and stuff it into the ErrorLevel property of the Windows-style Event.  You can still get to the original property using


Parameters are generally reliably ordinal, so you can hardcode the ordinal number this way.  I ran into a lot of interesting issues with regard to this when I was doing a lot of work with the OleDB data providers.

Getting this data is often difficult.  One way to figure out what your datatype looks like is to create a rule on it at a provider level.  In other words, create an alert rule that will raise an alert on the source of your data without doing any filtering.  To get the WMI PropertyBag example, I created an alert rule that did nothing more than use the namespace


and the very basic WMI query

select * from AUDIT_LOGIN

That created an alert in OpsMgr.  The information does show up in a relatively friendly display in the alert view details, but it tells you nothing about the XPath.  From here, it’s off to the database!

The Alerts table contains all alerts, as you might suspect.  The Context column contains the dataitem that raised the alert.  With that detail, you could start to drill into XPathQuery syntax for filtering, and XPath for parameter replacement in the alert description.
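If you’d rather not query the database directly, a sketch like this in the OpsMgr Command Shell should surface the same raw Context XML (this assumes the OpsMgr 2007 snap-in is loaded; Context is the alert’s context data item as a string per the SDK):

```powershell
# Dump the raw Context XML of the most recent alerts (OpsMgr 2007 Command Shell)
Get-Alert | Select-Object -First 5 Name, Context | Format-List
```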

In nearly all cases, there is a Context component of the path when the alert was raised by a monitor changing state.  Usually this is in the form of

$Data/Context/DataItem/…$
But in the case of some other things like MonitorTaskDataType, Context is buried a little further down in the path because of the multiple levels of embedding that have happened by the time this datatype is created.  See Marius’ note on this for the XML.

Feel free to post questions.  I’m interested in getting this article a little more fleshed out.


Update: Useful link to Kevin Holman’s site: http://blogs.technet.com/kevinholman/archive/2007/12/12/adding-custom-information-to-alert-descriptions-and-notifications.aspx

Posted in Uncategorized | 4 Comments

Scheduled Monitoring

Seeing as this was asked on the newsgroups three times in less than two weeks, and seeing as I slapped together an MP for the first case that could apply to the others, I thought I should post it.

The trick is to set up a custom MonitorType that uses a scheduler to create the regular interval (e.g. "every 60 seconds"), then a scheduler filter to ensure that the data source module only collects information on certain days of the week at certain times (M-F 6am-7pm).  From there, go through the normal MonitorType components (data source, condition detections, state results) and create a monitor that uses the custom monitor type.
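The scheduler-plus-filter part of such a MonitorType might look roughly like this sketch (the IDs, the System alias, the time window values, and the exact filter schema are assumptions; check them against the System library before using):

```xml
<!-- Sketch only: IDs, the "System" alias, and the filter schema details are assumptions -->
<DataSource ID="Sched" TypeID="System!System.Scheduler">
  <Scheduler>
    <SimpleReccuringSchedule> <!-- the misspelling matches the shipped schema -->
      <Interval Unit="Seconds">60</Interval>
    </SimpleReccuringSchedule>
    <ExcludeDates />
  </Scheduler>
</DataSource>
<ConditionDetection ID="TimeWindow" TypeID="System!System.SchedulerFilter">
  <Schedule>
    <WeeklySchedule>
      <Windows>
        <Daily>
          <Start>0600</Start>
          <End>1900</End>
          <DaysOfWeekMask>62</DaysOfWeekMask> <!-- assumed mask for Mon-Fri -->
        </Daily>
      </Windows>
    </WeeklySchedule>
  </Schedule>
</ConditionDetection>
```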

See the MP linked below.




Update:  Daniele Grandini wrote a great article about this here.

Posted in Uncategorized | 2 Comments

Live Mesh Hall of Mirrors

Okay, it’s not about OpsMgr, but it was so cool I had to put it somewhere.

The new beta of Windows Live Mesh actually installs on WS08 x64, so I was able to add another machine to my mesh.  It also fixed something in the Connect To feature so that I could actually use it.  It displays a scaled view of the remote desktop and allows for control by a user at either computer.

The fun bit happened when I tried to connect to Machine B from within my remote connection to Machine A.  It dutifully displayed a scaled-down view of the remote machine, which included a scaled-down view of this machine, which in turn included a scaled-down view of the remote machine with a scaled-down view of this machine... wash, rinse, repeat.

Print Screen didn’t work, so I had to resort to a digital camera.

Posted in Uncategorized | Leave a comment

Using Multiple SMTP Servers

Sorry for the lag in posting.  I’ve been on the road.

As mentioned, I post when I find something interesting, and we had a recent customer request that was very interesting to me.  They have multiple internal mail servers for different purposes, and want to be able to send email notifications to the right recipient at the right server for the right alert.

As with so much other stuff, the OpsMgr console doesn’t allow this.  The best you get is primary and failover servers. 

But email servers are just write actions, and email notifications are just workflows.  Seems like this should be doable.

It is.  Here’s how you do it using the authoring console.

First, import the Microsoft.SystemCenter.Notifications.Internal MP into the authoring console.

Create a new composite write action that will be your SMTP server.  I’m going to call it My.Second.Smtp.Server for the purpose of this howto.  Use either the Microsoft.SystemCenter.Notification.NotificationActionAccount or another appropriate account if you have one.  Name the Write Action whatever you want.  It only has one module: Microsoft.SystemCenter.Notification.SmtpNotificationTransportAction.  There is no built-in page to edit this type, so you’ll have to drop into XML.  Configure as shown:

<Configuration p1:noNamespaceSchemaLocation="C:\Documents and Settings\Administrator\Local Settings\Temp\NotificationAction – Microsoft.SystemCenter.Notification.SmtpNotificationTransportAction.xsd" xmlns:p1="http://www.w3.org/2001/XMLSchema-instance">

That’s your endpoint.

Next, create another composite write action.  I’m going to call it Send.Email.To.Second.Server.  This one is where you configure the message format, then send the information to the endpoint My.Second.Smtp.Server.  This one has a ConditionDetection module of type Microsoft.SystemCenter.Notification.SmtpNotificationContentGenerator and the My.Second.Smtp.Server module.

Again, there’s no configuration UI for the ConditionDetection module, so you have to use XML.  Configure as shown:

<Configuration p1:noNamespaceSchemaLocation="C:\Documents and Settings\Administrator\Local Settings\Temp\ContentGenerator – Microsoft.SystemCenter.Notification.SmtpNotificationContentGenerator.xsd" xmlns:p1="http://www.w3.org/2001/XMLSchema-instance">
    <Subject>Alert: $Data/Context/DataItem/AlertName$ Priority: $Data/Context/DataItem/Priority$ ResolutionState: $Data/Context/DataItem/ResolutionStateName$</Subject>
    <Body>Your message here.  You can use all the $Data/Context…$ fields you want.  Since you are editing the right MP, have a look at the DefaultSmtpAction write action module right there in your authoring console to see what the default configuration looks like.</Body>
</Configuration>

The My.Second.Smtp.Server module has nothing to configure.

You now have a Write Action that can be used in a rule workflow to send email.  At this point, go ahead and export your management pack back into OpsMgr.  You could create a rule manually, but it’s easier to let the UI do a bunch of rule stuff for you such as figuring out the GUID of your recipient.  You can also take advantage of the PowerShell script to create subscriptions for specific rules.  (If you create your own rule, make sure to set the Category to "Notification")

Create a new notification as you normally would.  Once you’ve done that, re-import the Notification management pack back into the Authoring Console.

Find the rule you just created in the Health Model – Rules section.  It’s going to have some awful GUID name that starts with "Subscription…".

Remove the existing Write Action and replace it with Send.Email.To.Second.Server.


Save changes and export the management pack back into OpsMgr.  The notification rule will now send email to your configured recipient via the second SMTP server.

If you have an easier way to do this, please let me know.  This was a quick-and-dirty proof of concept, and I have a feeling in my gut that there’s a more elegant way to do it, and possibly a more elegant way to maintain it.  All this importing and exporting is more than should be necessary.  I just don’t have time to refine it right now…

Posted in Uncategorized | Leave a comment