Strategy – SNCMA: A ServiceNow Architecture Blog

Wherefore Architecture?

If ServiceNow is built to support Citizen developers, why do we need ServiceNow architects?

“Thinking about design is hard, but not thinking about it can be disastrous.” – Ralph Caplan

Introduction

Over almost 14 years in the ServiceNow space, and across a rapid expansion of the ecosystem, it has been interesting to observe and analyze various organizations' approaches to developing and maintaining their ServiceNow environments. Specifically, how do organizations manage the inflow of business needs, the distribution and velocity of development, configuration and administrative work, and the ongoing maintenance of the platform? As the footprint of ServiceNow has expanded in conjunction with the expansion of the business functions the platform supports, I increasingly see divergence in these strategies. I'm writing this article to articulate these strategies and provide my view of how each succeeds and fails, along with my recommendations for the correct strategy given a company's view of ServiceNow.

Management Approaches

There really are two ends of the spectrum for how ServiceNow environments are managed. At one end is the idealized view and what ServiceNow itself espouses: Use of Idea and Demand Management to receive and vet business requirements, an oversight board – which ServiceNow calls a “Center of Excellence” – who does the vetting and prioritization of these requirements, Agile and PPM to manage the development work to fulfill these requirements, and an operational organization that handles release management, break/fix work, upgrades, and performance and security of the platform. If you’ve got your thinking cap on while reading this, you’ll quickly sense that this is intended and works best in the largest ServiceNow implementations, where the platform is a large part of an overall enterprise strategy for a company.

The other end of the spectrum stems largely from traditional IT functions; that is, a purely operational model and mindset where all development is treated as one-off break/fix/enhance type work. Where “keep the lights on” is the primary and sometimes only strategy. In these organizations, ServiceNow development work is typically handled through Incident and/or Enhancement processes, and each task is designed, developed and released “in a silo”, usually without thought to larger strategic initiatives. In other words, the view of the development does not extend beyond the scope of the need elucidated.

With a 25 year career in IT, I’m certainly aware of and sympathetic to this mindset. I find it particularly prevalent in MSP or MSP-like organizations. It’s not that the people running these organizations intend to be “unstrategic” (not a word), it’s what they know. These mindsets are built over years and decades of running IT as an operational entity.

There is a cost to doing business this way – and this is the crux of this article. When you implement under an operational mindset, you necessarily build everything as a one-off. Critically, no design or architecture considerations are taken into account, which creates concerns for platform maintenance, stability, health and optimization. These can range from the simplest quirks, like inconsistent forms and fields and re-created code logic, to large-scale issues with performance and user experience.

Examples

Here are some specific examples of development done without design or architecture prior to “hands-on” work:

  • A developer customizes an out of box ServiceNow application when a custom application would have served the requirement better. This leads to upgrade issues.
  • A developer builds a security requirement using client-side functionality, which is pseudo-security. This hole is exposed when integrations or server-side functionality access the same data.
  • A series of requirements for a single application are developed as one-offs. After these are implemented, the UI/UX experience is compromised, as now the form design is cluttered and out of sync with other applications. Adding UI logic and many related lists hinders the performance of form loads.
  • One developer uses Workflow, another Flow Designer, another a series of Business Rules, and another a series of Glide Ajax client scripts, all to implement similar requirements. Maintenance becomes hyper complex as each change or fix is a one-off “head scratcher”; “Now where is this functionality??”

I can argue that Agile is a contributor to this problem. Not the methodology itself, but the incomplete usage of the methodology. I often see organizations going no further than Stories to manage their work. While Stories done correctly are ideal for “state the business requirement, not the technical how” of documentation, without using Epics to group requirements into a cohesive release, and more importantly, without architectural design overseeing the Epic, the Stories are developed in silos and lead to the issues noted above.

Best Practice

In my experience, the best practice is to have an architectural or design review of all requirements prior to development. Some may only need a cursory review to confirm it can be built as a one-off. Others may need a directed technical approach because there are several ways it could be built, and a consistent approach platform-wide is best for long term maintainability. And some may need a complete analysis of pros and cons, buy versus build, custom application versus out-of-box in order to build the right solution for the business need and the platform sustainability.

I’ve included a diagram below that shows the “idealized” process flow, including a Center of Excellence that fulfills this function:

Center of Excellence

The concept of a Center of Excellence, or at least some level of architectural oversight, is not meant to be onerous or a process bottleneck. This is a common concern organizations have, and a reason they don’t do it. My argument is that the operational sweet spot for such a function lies somewhere in the middle of the spectrum: Organizations are free to be as fast and independent with business requirements as they choose. The oversight part of the process is a simple architectural design review of all Stories (requirements) prior to development, with the end result a proper technical approach to development. A good architect will quickly recognize when there are multiple approaches to implement a requirement and provide guidance on the correct approach, taking into consideration all aspects mentioned previously. If the Agile methodology is being used, this can be part of the grooming process.

The diagram above is one I drew to show where the Center of Excellence lives in the overall development process: between requirements gathering and execution, whether as operational one-offs or as larger project-type initiatives.

ServiceNow’s Documentation on Center of Excellence

In the end, it comes down to an organizational decision, even if it is not made consciously: "Do we spend the time up front to ensure things are developed in line with a cohesive platform strategy, or do we dedicate more time, money and resources to fixing issues when they (inevitably) arise in Production?" The simple analogy is working to prevent forest fires versus dedicating resources to fighting forest fires.

What We've Got Here Is Failure to Communicate – Part 2

In Part 1 of this article, I delved into Inbound and Outbound design considerations. Now, in Part 2, I'll cover considerations for a true eBonding type integration as well as other general tips I've learned through the years building integrations.

eBonding Design Considerations and Good Practices

As mentioned previously, the example I'm working from is a bi-directional application-to-application integration, meaning that the systems are integrating application records throughout the lifecycle of that application's workflow. For example, an Incident in system X that integrates with a ServiceNow Incident and exchanges updates throughout the life of both incidents, regardless of who has ownership of the resolution. Many know this concept as "eBonding". Simply put, this is integration of both data and process, where what data is exchanged, and when, is driven by process and may also influence process.

The technical designs I outlined in Part 1 work very well for eBonding, and are in fact designed to work with this practice. In addition to the technical aspects, here are other considerations when designing an integration solution for eBonding:

  • Both systems have to agree on the field mappings and data types. (No different than any other integration.)
  • Both systems have to agree when mapped fields can be updated. This is especially important for things like the ServiceNow “state” field, which either controls or is controlled by workflow. In our Incident example, often the only states that are allowed to be set by the integration are canceled or resolved. Other states may change in the other system but aren’t automatically updated by the integration as it may affect workflows, SLAs, etc. Rather, information may be included in a work note so each system is aware of activity in the other, but the process is not potentially adversely affected by it.
  • The integration needs to include mapping translations for field values that don't match in usage across the systems. For example, if ServiceNow uses Priorities 1-4 and System X uses Severities 1-10, you'll need to create a mapping matrix to map System X's Severities into ServiceNow's Priorities, and vice versa. (Also consider States, Categories, etc.) A simple mapping sketch follows this list.
  • You’ll need to consider how Reference fields get populated and integrated, but I’ll discuss that more in the Good Practice Tips.
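To make the mapping-matrix idea concrete, here is a minimal sketch of a translation function, assuming System X sends a numeric Severity from 1 to 10; the breakpoints and the function name are hypothetical and would come out of the agreement between the two teams. Something like this could live in a Script Include and be called from a transform map field script:

```javascript
// Minimal sketch of a Severity-to-Priority translation (hypothetical breakpoints).
function mapSeverityToPriority(severity) {
    var sev = parseInt(severity, 10);
    if (isNaN(sev))
        return '4';   // default to the lowest Priority for unrecognized values
    if (sev <= 2)
        return '1';   // Severity 1-2  -> Priority 1 (Critical)
    if (sev <= 5)
        return '2';   // Severity 3-5  -> Priority 2 (High)
    if (sev <= 8)
        return '3';   // Severity 6-8  -> Priority 3 (Moderate)
    return '4';       // Severity 9-10 -> Priority 4 (Low)
}
```

The reverse mapping works the same way for outbound updates; keeping both in one place means the agreed matrix is maintained once.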

I'm including diagrams below from the AT&T Incident eBonding I built for ServiceNow. They detail the integration flow for two scenarios: a "Proactive" incident initiated by AT&T, and a "Reactive" incident sent to AT&T. In both scenarios AT&T is the owner of the incident – responsible for the resolution – as the use case is that AT&T owns the customer's network and the incident is network related.

Note the listing of updateable fields – and when they can be updated – as well as the uni- and bi-directional flows of data.

Proactive Ticket
Diagram 2: Example eBond flow for an AT&T initiated Incident into ServiceNow
Reactive Ticket
Diagram 3: Example eBond flow for an AT&T Incident initiated in and by ServiceNow

The keys to a successful eBonding integration are the discussion of, and agreement on, what data will flow between the systems and when, and rigorous test planning and testing of all lifecycle scenarios. These are vital to ensure you don't break existing internal processes already developed and running in your ServiceNow environment.

Other Good Practice Tips

In addition to the primary design considerations outlined above, I recommend the following:

  • While security is of the utmost importance, and is often the thing customers think about first, try to design and build your integration without the security layer, or use the most basic security possible. This allows you to prove out the design and confirm the connectivity first, and assumes you have sub-production environments to develop and test in. Security can almost always be layered in as a second step. This eliminates a layer to troubleshoot as you iterate your development.
  • You’ll need to consider and account for integrating ServiceNow Reference fields *. As you know, these are fields that are stored as sys_ids in the integrating ServiceNow record, which is not likely to mean anything to the external system. Here are some guidelines for integrating Reference fields:
    • Consider if there is value in having the Reference data tables stored and maintained wholly in each system, so each is aware of the full dataset and mapping is an easier exercise. (There are good reasons to do this, and reasons it's often either impossible or a bad idea.)
    • Ensure that both systems have a field that uniquely identifies the reference record in both systems. For example, for user records, email address may suffice.
    • Ensure that field data is included in the bidirectional payloads.
    • Use the "Reference value field name" in your Web Service Import Set Transform Map Entry so this field is used to choose the right ServiceNow reference record (using out-of-box functionality again!)
    • Set up your Outbound Field Mapper to map the ServiceNow field to the System X field, so that the external system doesn't get the sys_id (a short sketch follows this list)
    • And for goodness' sake don't try to use display value strings as unique identifiers!
  • I suspect integrations that don’t use REST (or even SOAP) could use the same approach I’ve outlined. Even a file-based export could work the same, save for the nature of the outbound and/or inbound payloads.
  • Wherever possible, the outbound integrations from ServiceNow should be run asynchronously. This is a general good practice with all integrations. For example, if the integration is triggered via a Business Rule, the Business Rule should be set to async if at all possible. This way the end user and the system (UI) do not wait on the integration to move forward, and the integration runs as system resources are available to it. The exception is if there is a business requirement for the system to wait on the integration, e.g. the end user is expecting to get a result back from the external system before proceeding. There are also technical reasons this can be a challenge: For example, you cannot run an async Business Rule on a comment or work note addition.
  • Only use a Scripted Web Service if the inbound payload will not be in a name:value format that can easily map into a staging table, and rather requires scripting logic to manage the payload before injecting it into a ServiceNow record. Consider this a “last resort” in most cases.
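As a small sketch of the Reference field points above, here is one way an outbound payload builder might resolve references to the agreed identifiers rather than sys_ids. The field and payload names are hypothetical examples, not taken from any packaged integration:

```javascript
// Hedged sketch: resolve Reference fields to agreed identifiers before sending,
// so the external system never receives a ServiceNow sys_id.
function addReferencesToPayload(current, payload) {
    // caller_id references sys_user; email is the agreed unique identifier
    payload.requester_email = current.caller_id.email.toString();

    // assignment_group references sys_user_group; its name is the agreed key
    payload.assignment_group_name = current.assignment_group.getDisplayValue();

    return payload;
}
```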

Some of these points could warrant their own article; hopefully this article triggers your design thoughts and gives you ideas about how to manage your integrations.

Conclusion

Since its early days ServiceNow has had integration technologies built into, and fundamental to, the platform. Many a system has been integrated into ServiceNow in all shapes and flavors. While all kinds of new tools inside and outside of ServiceNow have attempted to simplify integrations, the “good old” ways still work when no other options exist (or existing options don’t quite fit the bill).


What We've Got Here Is Failure to Communicate – Part 1

Good Practices for Designing Integrations in ServiceNow

Captain: You can have the easy way, Luke… Or you can have it the hard way… It’s all up to you. – Cool Hand Luke

If you work in a ServiceNow environment in 2023, it’s more than likely you’ve got it integrated with other systems. Given ServiceNow’s place in the market, it’s unlikely that an instance is running in an environment small enough or segregated enough to not need to be integrated with other systems. At the very least, you’re likely getting your core data from somewhere outside of ServiceNow, and hopefully not through a manual import. (Who wants to keep up with that effort?) You may be using a “good old” LDAP integration, or you may be using middleware, or an Integration Hub pre-built solution. Regardless of the solution, I’m going to use the rest of this article to talk about good practices for how integrations should be designed in ServiceNow so that your applications, and indeed the platform as a whole, are protected from possible integration chaff, and so they can be easily extended by non-coders when the need arises. I’ll primarily cover custom Web Service integrations, with the intention that if you understand how to design these kinds of integrations the knowledge translates well to all integrations.

In Part 1 of this article, I’ll delve into Inbound and Outbound design considerations, and in Part 2, I’ll cover considerations for a true eBonding type integration as well as other general tips I’ve learned through the years building integrations.

A quick bit of "curriculum vitae" to establish my bona fides: I've been doing ServiceNow integrations since 2011; I was one of the early ServiceNow Professional Services consultants to delve into integration work. I developed one of the first AT&T eBonding integrations and gave the code and configuration to ServiceNow development to leverage as a packaged offering. I also built the ServiceNow side of the Workday to ServiceNow connector for Workday (the company). I've focused primarily on SOAP (early on) and REST based Web Service integrations. I also helped build the first iteration of the Perspectium DataSync tool.

Baseline Knowledge

This article assumes the reader has a baseline knowledge of how to do integrations, both in general and in ServiceNow. It also assumes you have knowledge of the various ways that ServiceNow does, or can do, integrations “out of the box”.

The examples in this article are based on Diagram 1. The example is a bi-directional application to application integration and includes the following:

  • The REST protocol with JSON payloads
  • A Web Service Import Set to stage the inbound data
  • An integration with a Task-based application in ServiceNow
  • A field mapping table to manage inbound and outbound data updates
Diagram 1: Example Salesforce Integration Using REST, a Web Service Import Set and a Mapping Table

Inbound: Default to Using a Staging Table

You should always default to staging the inbound data in Web Service Import Sets (WSIS). These are nothing more than Import Set tables with a slightly extended API. (I’ve honestly never needed anything more than the standard API calls when using these tables.) Here are the reasons these tables should be the place to integrate into ServiceNow:

  • Staging the data insulates your application tables from data issues with inbound integrations. This allows you to build both data and logic safeguards into your integration. External systems can use the WSIS Table APIs to inject data into ServiceNow, where it waits to be transformed into application or core-specific data. Transform logic can ensure that bad or malformed data doesn’t make its way into your SN processes, preventing potential SN instance issues.
  • Staged data in WSIS can be transformed like any import set. This means SN administrators who may not be familiar with Web Services or integration design in general can still configure transform maps and transform logic. Many of the future changes to the integration can be handled by anyone who can maintain transform maps.
  • Staged data can be used for troubleshooting integration issues: If there's an issue with the integration after the inbound request has reached ServiceNow, the import set record serves as an auditable trace of the raw data received. Often issues can be solved by a review of this data, e.g. "Hey, we agreed you'd send the data in format YYYY-MM-DD and you sent MM-DD-YY." Clever developers will set up ways to store raw JSON payloads and integration messages (errors, etc.) in the Import Set record.

Things to note with this approach:

  • For most situations, you’ll need to ensure the integrating system receives the unique identifier of the record created by the transform, not the import set record. Recent versions of ServiceNow’s Import Set API appear to do this inherently in the JSON response.
  • The import set will need to be set up not to use the default ServiceNow system delete property of 7 days if you want to be able to trace issues older than this.

ServiceNow documentation can show you how to achieve both of these.
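To make the safeguard idea from the first bullet above concrete, here is a rough sketch of an onBefore transform script that skips or flags bad rows before they reach the target table. The staging field names (u_case_number, u_opened_date) are hypothetical examples:

```javascript
// Hedged sketch of an onBefore transform map script acting as a data safeguard.
(function runTransformScript(source, map, log, target /*undefined onStart*/) {

    // Skip rows that arrive without the agreed unique identifier
    if (!source.getValue('u_case_number')) {
        ignore = true;
        log.warn('Row skipped: missing case number');
        return;
    }

    // Flag rows whose date does not match the agreed contract (YYYY-MM-DD)
    var opened = source.getValue('u_opened_date');
    if (opened && !/^\d{4}-\d{2}-\d{2}/.test(opened)) {
        error = true;
        error_message = 'Unexpected date format: ' + opened;
    }

})(source, map, log, target);
```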

For folks reading this who have become skeptical because Integration Hub doesn't take this approach, I learned this approach from ServiceNow employee #1 (those who know, know). My standard tack is to believe those who created the platform over "johnny-come-latelies".

Outbound: Create Extensible Field Mappers Instead of Writing Code

While WSIS are standard ServiceNow functionality, this recommendation is my good practice, and one I’ve espoused for all platform development. The goal is to build solutions that write code once, and build configurations that are extendable for future changes – ones that can be managed by non-coders. Think of it like a custom Transform Map for your integration. In the diagram above, this is the “X Request Map” at the middle bottom. In its simplest form, the table contains the following:

  1. The table and field of the application in ServiceNow
  2. The table and field of the integrating system
  3. The nature of the integration: Inbound, Outbound or Bidirectional
  4. An active flag

For #1, you can use the Table Name and Field Name dictionary field types; the latter is dependent on the former, i.e. you choose the Field from the Table selected (see ServiceNow's documentation on dictionary types).

Numbers 1 and 2 tell the integration which tables and fields from the two systems map to each other – conceptually, exactly like a Transform Map, although one of the table/field combinations in this case is from the external, integrating system. Number 3 is a choice drop-down with options for Inbound, Outbound and Bi-directional. Finally, the Active flag (Number 4) tells the integration whether the mapping is currently in use.

In Diagram 1, the bottom right side of the image shows how and where this is used. In many integrations I’ve seen, the creation of an outbound request from ServiceNow is done with pure code: the payload is built out via code, and changing the payload requires changed code. My suggested approach is to use Business Rules to trigger the request – when an update to an integrated record occurs, trigger the initiation of an outbound request. I use a Script Include function to build the request, so that it can be called from multiple places. Most importantly, I use the mapping table to determine what fields should be sent, and the field names to get the values from the ServiceNow record. The process flow is:

  1. The integrated ServiceNow record is updated
  2. A Business Rule running on that record’s table determines if the update needs to trigger the outbound integration
  3. The Business Rule code calls a Script Include function, passing it the current GlideRecord
  4. The Script Include function queries the mapping table, filtering on active, type=outbound, table is the current table
  5. The Script Include function loops through the result, pulling the values for the external system fields and the GlideRecord field values, building an outbound name:value pair payload
  6. The Script Include function triggers an outbound REST message and attaches the payload
  7. The Script Include function processes the response as desired

Important Note: Wherever possible, the Business Rules should be run async. There is more on this in part 2.


If this is built correctly, the major benefit is that future updates to the integration can be completed with updates to the Mapping table, rather than with code. A true low-code, no-code solution!
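As a rough, hedged sketch of steps 3 through 7, the Script Include might look something like the following. The mapping table and field names (u_integration_field_map, u_external_field and so on) and the endpoint are placeholders, not names from any real implementation:

```javascript
// Hedged sketch of a mapping-table-driven outbound request (steps 3-7 above).
var XIntegrationUtil = Class.create();
XIntegrationUtil.prototype = {
    initialize: function() {},

    // Called from an (ideally async) Business Rule with the current GlideRecord
    sendOutbound: function(current) {
        var payload = {};

        // Step 4: find active outbound/bidirectional mappings for this table
        var map = new GlideRecord('u_integration_field_map');
        map.addQuery('u_active', true);
        map.addQuery('u_table', current.getTableName());
        map.addQuery('u_direction', 'IN', 'outbound,bidirectional');
        map.query();

        // Step 5: build a name:value payload using the external field names
        // (Reference fields should be resolved to agreed identifiers - see Part 2)
        while (map.next()) {
            payload[map.getValue('u_external_field')] = current.getValue(map.getValue('u_field'));
        }

        // Steps 6 and 7: send the payload via an outbound REST message and check the result
        var rm = new sn_ws.RESTMessageV2();
        rm.setEndpoint('https://external-system.example.com/api/tickets'); // assumed endpoint
        rm.setHttpMethod('post');
        rm.setRequestHeader('Content-Type', 'application/json');
        rm.setRequestBody(JSON.stringify(payload));

        var response = rm.execute();
        if (response.getStatusCode() < 200 || response.getStatusCode() >= 300)
            gs.error('Outbound integration failed: ' + response.getBody());
    },

    type: 'XIntegrationUtil'
};
```

The async Business Rule then needs only a single line – something like new XIntegrationUtil().sendOutbound(current); – and everything else is driven by the mapping table records.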

More to come in Part 2.

Building Core Strength

Why good core data is both the roots and the flowers of your ServiceNow tree

“A tree with a rotten core cannot stand.” — Aleksandr Solzhenitsyn

In the fitness world, and in fact the physical human world, your core is the central part of your body. It includes the pelvis, lower back, hips and stomach. Exercises that build core strength lead to better balance and steadiness, also called stability. Stability is important whether you’re on the playing field or doing regular activities. In fact, most sports and other physical activities depend on stable core muscles.

As ServiceNow has moved further towards being a product company and less a platform company, it’s easy to lose sight of the aspects of the system that are core to its functionality and its value. If you’re solely focused on products*, it’s akin to building big arms and shoulders, and large calves and thighs, but ignoring your back, abs and glutes. Eventually you’ll be a Popeye-ish figure, unable to balance because you’re both disproportionate and “weak in the middle”. In this article, we’ll discuss what’s core to ServiceNow, what benefits having a good core provides, and how to build and maintain this core.

*Products, or Applications and Application Suites, are things like ITSM, CSM, HRSM, ITBM, ITOM, and their components. These are the things ServiceNow builds, markets and licenses on top of the core platform.

What is the ServiceNow Core?

There are several aspects to the core platform. I've highlighted some of these in a previous post, primarily the development components that make up all the applications. In this post, I'm going to focus on the core data, which at the least drives consolidated reporting and, as I'll elucidate later, at best gives full insight into how your business is running.

From a data perspective, the core data are the tables that can be seen in the Organization application in the left navigation menu.

The main tables are:

  • Companies
  • Departments
  • Locations
  • Business Units
  • Cost Centers
  • Users
  • Groups

Note: Vendors and Manufacturers are Companies with particular attributes, not unique tables.

If you look at the schema map for any of these tables, you’ll see how many tables reference these records. For example, the Department table is referenced 746 times in my largely out-of-the-box PDI. Most of these are the CMDB, and indeed, it is hard to use the schema visualization ServiceNow provides because of the number of CMDB tables it needs to draw to represent the schema. If you look at the dictionary references to Department, there are still 36 entries.

However, this is just part of the usage of this data. Consider the dot-walking use cases for Department. (If you don't understand what dot-walking means, please refer to ServiceNow documentation.) Since Department is a field on the User record, everywhere a User reference exists, Department can be used by dot-walking to it. Looking at the dictionary references to User, in my PDI there are 784 non-CMDB fields across the platform. So this is 784 places where you can inherently filter on or group by Department by dot-walking from the User reference field.
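As a quick, hedged example of what that dot-walking looks like in a server-side script (the department name is a made-up value):

```javascript
// Filter Incidents by the caller's Department via dot-walking,
// without needing a Department field on the Incident table itself.
var inc = new GlideRecord('incident');
inc.addActiveQuery();
inc.addQuery('caller_id.department.name', 'Accounting'); // dot-walk User -> Department
inc.query();
gs.info(inc.getRowCount() + ' active incidents were opened by the Accounting department');
```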

Because the in-platform schema is overwhelmed by the CMDB, I drew a diagram of just how these core tables tie together:

core tables

Note: Depending on your organization, you may not need all these tables populated. Smaller organizations may not distinguish between Departments, Business Units and Cost Centers.

Building and Maintaining Your Core

Experienced system administrators and ServiceNow developers are familiar with these tables and this data. What I’ve often found is there’s an effort during the initial implementation to populate the required tables, then the maintenance is lacking and data becomes stale or messy.

Here are some common examples:

  • The data is imported once and either an integration or an ongoing process for maintaining the data isn’t implemented
  • The company does a re-organization and user departments, cost centers, business units aren’t updated
  • Unique identifiers aren’t determined for the core records and subsequent imports create duplicates
  • Companies treat users like tickets – they just need to be able to login, have the correct roles, and “life is good”

Here’s a small example I ran into recently: A customer had done a series of User imports from other systems without clearly identifying and marking a unique field. An integration was built from Salesforce using the email address as an identifier of a Requester (User) in ServiceNow. An issue was reported after we went live because the Requester was incorrect. The root cause was there were three active user records with the same email address and the system picked the first one sorted by sys_id. This issue was not previously identified because the two bad records weren’t being used by actual users.
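Here is a hedged sketch of how you might hunt for that kind of duplicate before it bites an integration; it is a generic query, not part of the customer scenario above:

```javascript
// List active User records that share an email address.
var dup = new GlideAggregate('sys_user');
dup.addQuery('active', true);
dup.addNotNullQuery('email');
dup.addAggregate('COUNT');
dup.groupBy('email');
dup.query();
while (dup.next()) {
    var count = parseInt(dup.getAggregate('COUNT'), 10);
    if (count > 1)
        gs.info('Duplicate email: ' + dup.getValue('email') + ' (' + count + ' user records)');
}
```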

In these scenarios, core data quickly becomes utilitarian and not useful for broader service management insight or improvements.

My recommendations for implementing and maintaining good core data are as follows:

  1. Identify sources of truth and system(s) of record for core data. This is an organizational best practice that certainly applies to ServiceNow as well. It’s rare that ServiceNow is or should be the source of truth or system of record for core data, perhaps other than local User Accounts, Groups and Group Memberships. For example, Active Directory is often the system of record for users across the enterprise. As an organization, identify these systems and implement integrations to receive data from these systems of record.
  2. Identify and implement unique identifiers for data records across systems. Akin to my example above, and assuming you’ve done step 1, before importing data from a system of record you need to determine the unique identifier from the source system. Ensure that ServiceNow has this field in the destination table (and import set table), and set up your transform maps or other integration logic to coalesce on this field’s data. This is critical to ensuring duplicate records are not created.
  3. Set up your imports and transforms to ensure core references are populated. You’ll need to order your transformations so that references to core table records on other core tables are populated – see the table diagram above. For example, in order to set the Location on the User record, the Location table needs to be populated first. However, if you want to use the “Contact” field on the Location records, you’ll need the Users in place. The reality is you’ll need to do multiple transformations or scripting to handle this circular logic. (Challenge yourself and try multiple transformations!)
  4. Use the Production (“PROD”) instance as the system of record for core data across ServiceNow instances. Within your ServiceNow environment, PROD should be the system of record for this core data; sub-production environments will get their core data from PROD via clones. You can and should use sub-production for testing core data integrations, but the data itself should come from PROD. This includes Groups and Group Memberships wherever possible, save for a one-off when development requires a new Group that cannot exist in PROD prior to release. (Think about this as you do development – often a Group can be created in PROD without impact to process.) Using PROD as the system of record for this data means you have matching sys_ids of these records across your environments, and references to this data will not break in clones or code and configuration promotions. It is fine and expected to create additional core data records in sub-production instances – test users in your TEST instance for example – but use PROD as your source of truth.

Benefits of a Strong Core

For experienced system administrators and ServiceNow developers who are aware of and/or follow good practices, I haven't mentioned anything they don't already know. Sometimes it's a matter of time and execution rather than knowledge. But what is sometimes not known is why this matters beyond having good, clean data in your systems – what the larger benefits of having this data correct and available are.

It is hard to generalize all the benefits into clean, succinct bullet points. What you can do is move past the ideas of “number of tickets open/closed” and SLAs. Here are examples of the use of good core data, and hopefully it will trigger your imagination to think how it might apply to your organization and its business needs:

  • Using the Department and Cost Center data tied to the User References on task-based application records, you can see what organizations are delivering services to what organizations, and use this data for charge-back accounting. For example, IT has completed 500 Incidents for Sales, or HR has fulfilled 300 service requests for Finance. With timekeeping and cost accounting, this data could be used to flow-down into cross-department accounting.
  • Analyze trends of types of Incidents and Service Requests by Location (again by requesting User). This analysis could reveal localized Incident types that could be converted to Problems and avoided in the future.
  • Group by core data points to determine if certain organizations or locations could benefit from a new or modified service (Incident deflection?)
  • Align new hires and intra-company moves with Department so that standard packages can be pre-ordered (rather than asking each time). For example, the Sales Department employees always get certain access, software and hardware; this could be aligned with the Department so that when a new hire is requested who is part of Sales, the access, software and hardware can be automatically ordered.

For further reading, in a previous blog post, Tier 5 Service Management, I detailed an example of the level of service that can be provided, reported on and analyzed when your core data is complete and used.

Hopefully these examples trigger your own ideas about using referential core data to improve insight and improvements to your own organization.

Conclusion

At a cursory level, it is fairly obvious why having good data in ServiceNow is beneficial: clean is always better than messy. However, there’s more benefit than just cleanliness. Having accurate, up-to-date core data can help take your Service Management to “the next level” – both understanding what is occurring in your organization at a deeper level, and being able to make informed judgments about how to deliver Services that maximize benefit and minimize human effort. So start with your roots – good core data – and cultivate the ideas and features that will make your Service Management bloom.

Building The Perfect Architect – Part 2

What makes a good ServiceNow architect? And what makes "architect" a misnomer?

In part one, we discussed what an architect is within the ServiceNow and the larger IT ecosystems. Now, we’ll delve into design documentation – a key part of an architect’s deliverables, and some of the behaviors of folks who may have the title architect but whose actions belie the title.

Documenting designs and implementations

A good architect understands the value of documentation, and both creates and enforces documentation. My rules of thumb for documentation are:

  • Any custom development that includes creating new tables needs design documentation. This does not include Import Set tables, but does include tables meant for custom lookup functionality, extensions to existing functionality, and new task-based applications.
  • Any development that spans multiple Agile stories likely requires design and implementation documentation. What I mean by this is if the solution being implemented is complex enough to require multiple stories, there should be a centralized design for the solution that can be referenced by Story developers and post-implementation maintainers.
  • Any development whose essential nature cannot be found in existing ServiceNow documentation likely requires documenting. This is typically custom development.
  • Any development that would not be obvious to an experienced ServiceNow admin or developer who walks in “off the street” likely requires documenting. In other words, if an experienced ServiceNow person wouldn’t be able to understand the design of the solution simply by looking at what’s in the platform, then the architect “owes” them design documentation so they can see the thinking behind the design. (I can’t say how many times I’ve come into a situation like this and my only tact is to start reverse engineering code. I find this unacceptable.)

My standard “as-built” documentation for these scenarios includes:

  • The picture of what was designed (aka “Visio”)
  • The narrative of what was designed, and why
  • The elements of what was implemented, and where
  • The narrative of how the solution is to be maintained

For the sake of illustration, a Salesforce-integrated application design document may look something like this:

Design Diagram

Sample Functional Solution Design

Design Narration:

The solution uses a custom request application to align with the Salesforce model, and uses Web Service Import Sets to stage the Salesforce data. Within ServiceNow, the same request can be initiated through a Record Producer in the Service Catalog… continue to describe the diagram and how the pieces fit together.

Implemented Elements:

Sample Custom Element Table

Maintenance Notes:

In order to extend the data schema on both systems, add fields to the ServiceNow request and import set tables, and use the transform map functionality to populate the field(s). Extend the custom mapping table to Salesforce so the outbound REST message will pick up the new field from ServiceNow… continue to describe the maintenance and extension of functionality

The goal is not documentation for documentation’s sake. The documentation should be clear and straightforward – think of the person who is seeing the solution for the first time. If it were you, what would you want to see to understand what was built?

You should be able to give the document to an experienced ServiceNow admin or developer and they understand the solution and can maintain it without having to comb through the system and/or reverse engineer it to learn it. And if I’m being perfectly honest, there are plenty of times I have to do this with new ServiceNow created applications or solutions.

Architect Failings, aka When It’s a Misnomer

It’s also helpful to list things that aren’t architecture, or at least don’t fall under the heading of “what makes a (good) architect”. I posit that when someone who has the title of “Architect” does these things, they are not truly an architect.

  • Treating all requirements development as a one-off. In other words, as requirements come in, the named architect only considers the requirement within the scope of itself, and solutions inside of that box. Doing so results in many unrelated one-off solutions within the platform, without a cohesive platform strategy to tie them together. This leads to maintenance challenges, as each change requires modifying a particular element. For example, I was working with a customer who had several hundred catalog items, each with their own workflow. We made a platform change at the Task level that required the modification of the same few lines of code across hundreds of workflows. A good architect somewhere in that development journey should (would) have stopped these one-off developments and designed a solution where the code existed in one place and was called where needed by the workflows. (I would argue a good architect would also consolidate the workflows, but I hope you see my general point.)
  • Implementing all requirements with code. I also refer to this as “write code until it works”. I’ve written about developing solutions that leverage the elements of the platform and maximize configuration over code. An architect understands how this is done, and avoids the common trap of overcoding. Just as important, he or she assures that the development team doesn’t do this either by providing design guidance as I’ve described previously.
  • Pushing products over solutions. There is a balance that must be struck between defaulting to a ServiceNow product as a solution, and developing a custom solution that may more perfectly align directly with business requirements. An architect understands each, and the tradeoffs, can articulate the advantages and disadvantages of each, and guides customers to the correct implementation for their business needs. In my view, those who do not make this distinction, but default to a ServiceNow product based on a few keywords heard in requirements sessions, are better termed as Sales or Solution Architects and not Technical Architects.
  • Limiting solution scope to ServiceNow. When reviewing business needs, if an architect does not consider all possible solutions in the enterprise, but limits their scope to only ServiceNow, they are not performing the duties of a true architect.

Conclusion

A good ServiceNow architect always starts with understanding the business’s short and long term needs, and recommending a solution that aligns with both, regardless of the technology. He or she understands enough about the ServiceNow platform to recommend the correct application or platform technology to meet the need when ServiceNow is the correct solution. And the good architect provides design oversight so that all parties involved in the solution are working toward the correct goal in the correct ways. In short, the best ServiceNow architects have knowledge of, and can work in, all spheres of the ServiceNow and Enterprise IT scopes:

ServiceNow Spheres

Lastly, when the architect doesn’t have the required experience in a particular area, they know enough to seek expertise in that area (rather than “faking it until they make it”). If knowledge is power, knowing what you don’t know is wisdom.

Building the Perfect Architect – Part 1

What makes a good ServiceNow architect? And what makes "architect" a misnomer?

“Architecture is not an inspirational business, it’s a rational procedure to do sensible and hopefully beautiful things; that’s all.” – Harry Seidler

If you’re here and reading this, you probably have a concept of what a ServiceNow architect is. (In this context, “architect” means a ServiceNow technical architect.) And you’ve likely worked with folks who have the title “architect”, whether on implementation projects or as part of a larger IT ecosystem. But what does this mean, and what should it mean? I’ll spend the rest of this article discussing this topic. Having the title Certified Master Architect doesn’t mean I have all the answers, but I’ll posit that having my experience and acumen means I can define it as well as anyone.

In part one, we’re going to discuss what an architect is within the ServiceNow and the larger IT ecosystems. In part two, we’ll delve into design documentation – a key part of an architect’s deliverables, and some of the behaviors of folks who may have the title architect but whose actions belie the title.

It’s helpful to start with two lists that will contextualize the discussion: The roles that are common to a medium to large-sized ServiceNow environment, and the spheres of influence the ServiceNow leaders oversee.

Common ServiceNow Roles

  • System Administrator – hands on configuration and management of the platform
  • Business Analyst – elucidating business requirements
  • Developer – custom solution development
  • Designer – intra and inter application solution design
  • Architect

In many customer environments, folks fill more than one of these roles, and potentially one person wears all the hats in small platform implementations.

ServiceNow Decision Spheres

  • Hands-on development
  • Application design
  • Platform strategy
  • Enterprise strategy

These are the areas that require guidance to be provided and decisions to be made. Much like the roles, the people providing the guidance and making the decisions may cross spheres, but the spheres exist.

These spheres fit into a pyramid picture of ServiceNow within the IT ecosystem.

ServiceNow Spheres

At its most narrow, it helps define development best practices, and at its broadest delineates where and how ServiceNow fits in the overall IT enterprise.

Defining the “Good” Architect

In my experience, and based on feedback from my customers, here's what makes a good ServiceNow architect, compared to what might be considered architecture but really isn't:

Considering ServiceNow in the larger customer environment

This is traditionally the purview of the Enterprise Architect in larger organizations. The role takes into account short-term organizational and project needs, and longer-term strategic needs, and evaluates which technology tools are the correct ones to meet both needs. A ServiceNow architect should be able to converse intelligently about these customer needs and convey where ServiceNow can and should fit into meeting those needs. Just as importantly, he or she should also understand and be honest about where ServiceNow may not fit those needs. Between these two, the architect should be able to articulate what ServiceNow can do, the level of effort and cost to implement and maintain in ServiceNow, and help the customer make the correct decisions about using – and not using – ServiceNow.

Understanding the platform separately from products

A good ServiceNow architect understands the platform inherently – I’ve written about this previously. ServiceNow has increasingly moved from being a platform to being a series of products developed on that platform. While there is value in understanding these products, particularly from a business process perspective(*), a good architect can separate the chaff of product marketing from the value of the platform. And more importantly, can align this knowledge with customer requirements.

*For example, understanding how HR processes work, and how these processes are implemented in ServiceNow HR, has value.

Translating customer requirements into the correct platform solutions

Assuming an architect fundamentally understands the ServiceNow platform and understands when ServiceNow should or should not be used in the customer’s overall IT environment, he or she should be able to listen to business requirements and translate into the best way to achieve them in ServiceNow (assuming ServiceNow is the best place to achieve them). What this means is that the architect knows enough about the platform to determine if there is a pre-built solution that best serves the need, or if it is better served by extending platform capabilities to create a customized solution. He or she should also be able to articulate the reasons why to choose one over the other, and both speak to and document the advantages and disadvantages of both, including hard and soft costs.

Knowing and advising on best practices

The ideal architect is knowledgeable enough to not only design the correct solution, but advise on how to implement that solution. This includes advising on technical best practices. For example, when I serve as a technical architect on both projects and ongoing development in an Agile environment, my role and input includes documenting the technical approach on Stories. This is the detail of what and where to develop to satisfy the stated requirements, not necessarily the how. In other words, because there are often 3-4 ways to satisfy a requirement in ServiceNow, the architect should advise on the best way to do it based on their knowledge and experience. This is often beyond the knowledge of a developer or administrator, and beyond what can be found in ServiceNow documentation.

After reading this, you might think "but there's too much crossover with the roles mentioned at the beginning". And you may be right. What I've found in my ServiceNow journey is that it's rare that an architect can fulfill only a pure architecture role. Most customer environments aren't large enough to support this level of "purity" or responsibility delineation. In addition, I've found that you cannot make the correct architecture decisions without some hands-on experience with the platform. Trying to do so leads to "book" answers without the benefit of real-world experience.

In addition to the more pure architecture I’ve listed, the best architects can also advise on practices for instance management: managing configuration and code updates through the SDLC and provide insight on how to build and maintain to simplify system upgrades (new releases). In short, the architect should be a trusted advisor to IT Management on minimizing cost and maximizing value with ServiceNow.

Coming up in part two, we’ll look at documentation and behaviors that belie the architect title.

Starting Your "EPA" (Environment Proficiency Activities)

Tips for Optimizing your ServiceNow Instance Management

Having been in the ServiceNow business for over 12 years, you begin to take certain things for granted. At least until you see someone doing something that you don't expect and it makes you say, "hmm, maybe this isn't common knowledge". One such area where I've noticed this in recent years is the management of an overall ServiceNow environment – that which encompasses all your ServiceNow instances, and the processes and techniques by which configuration, code and data are built, moved and shared between them. In my experience, if done properly, this should be a minor undertaking from a time and manpower perspective. However, I've witnessed customers dedicating teams of people and days of effort to manage this. To this end, I'll share my experiences in making this exercise a minimal part of the ServiceNow time spent within the SDLC.

Ways to maximize your environment management efficiency:

  • Time-box your update sets. This means using one update set per application scope per development period. Why? When developers are working in the same application space, and sometimes even when they are not, they may work on the same element within the development timeframe. If they are working in their own update sets and those sets are moved out of order, you will have to resolve conflicts on the destination instance(s). Here's an example (we'll use an Agile environment):
    • Developer Bob has a Story to add a field to the Incident form in the main section (0) and creates a Story-based update set to capture his work.
    • Developer Jenny has a Story to remove a field from the Incident form in section 2 and creates a Story-based update set to capture her work.
    • Jenny finishes her work first and demos the change to the business owners. She received feedback that they actually want the field in section 3 now.
    • In the meantime, Bob finishes his work, demos it, and receives sign-off.
    • Jenny makes the updates asked for, demos it, and receives sign-off.
    • Jenny moves her update set to TEST and commits it.
    • The next day, Bob moves his update set to TEST and receives a warning that a newer update exists for the Incident form. He now has to resolve the conflict and hopefully not overwrite the latest version of the form.

    This is obviously a very basic example, but gives you the idea. This could apply to almost any element in the system – Form and List views, Business Rules, Script Includes, Flows and Workflows, etc. In case you’re wondering – and I want to be very clear on this – Application Scoping does not help with this. In this example, both would be working in the same scope, and there are exactly zero reasons why you should ever do scopes by story or developer.

    The last thing I’ll say on this point is that the concept applies whether you are doing project-based work – e.g. deploying a new application or releasing a large set of functional changes – or managing ongoing maintenance and enhancements. Either situation should include time-boxed development cycles if the development organization entity is being run properly.

  • Make your production instance the source of truth for core data. Core data includes Locations, Departments, Vendors, etc., but it also includes Users and Groups. If you make production your source of truth* (* – at least within your ServiceNow environment – I'm not suggesting ServiceNow should be a source of truth for any of this data), you can create and update these records in production and XML export them back to your sub-production environments. In this way, they will always have the same sys_ids, and configuration using them will not fail. There may be scenarios where you absolutely cannot have this data in production prior to "going live" with what you are developing. (I'm thinking of a new Assignment Group that cannot show in the pick list until the new development is live.) In this situation, create it in your development environment, and document its move to test and production as part of your go-live planning.
  • Make cloning a standard process occurrence. This applies to ongoing environment management rather than a new customer going live on ServiceNow for the first time. As an organization, make it a standard to clone your sub-production environments from production on a regular, scheduled basis. And then actually do it. Here are the benefits:
    • It keeps the environments in sync – what’s been released in production is reflected in sub-production. This makes regression testing easier. It also makes upgrade testing easier – if your upgrade works in sub-production you can have confidence it will work in production.
    • It settles your development lifecycle into a cadence. If your developers get used to wrapping up their work within certain timeframes, and having code freezes in certain timeframes, it will make your processes better align with SDLC standards, and make for a less chaotic development environment overall.

    This is not without challenges. For instance, the clone schedule may coincide with urgent incident resolution or longer-term project-style implementations. We’ll talk about managing and mitigating these a little later.

  • Use the Update source – Remote Instance records to move update sets. ServiceNow has the setup to allow you to connect to other instances and pull all completed Update Sets from that instance that don’t already exist in the destination instance. Contrast this with using the XML Export and Import functionality to do it manually. (I assumed this was common knowledge but I see a lot of developers and admins doing it the “XML” way.)

Update Source Records

Here’s a diagram of the process I used at Workday (as a customer). This is a 4-week view, and is one full cycle of two (2) Sprints and a clone back at the end of the 4-week period. To the earlier points, the update sets were limited to one per Sprint per Application Scope.

Release Clone Cycle

At the end of each Sprint, we used the last 2 days to move code to TEST, do our regression testing, and move to PROD. After the second Sprint and the finalization of code moves, we would use the weekend to clone back to TEST and DEV (one per day). We would also use the code freeze days to complete our documentation and do other system maintenance activities.

A few other notes about moving code and configuration in 2 week Sprint cycles:

  1. Urgent fixes (Incidents) should be managed separately in their own update sets and moved “out of band” – outside of the normal Sprint cycle – in order to maintain a stable environment. (And of course as part of a Change Management process.)
  2. For longer-term development projects where the work cannot be completed in 2 weeks, whenever possible incorporate the idea of a modified "walking skeleton" in order to close and move update sets in the normal time frames. This involves moving the code and configuration to production but hiding it until it is ready for go-live – inactivating the visible components, or making them visible to admins only, so end users don't see it. (A minimal go-live activation sketch follows this list.)
  3. For exceptions that cannot be dealt with in any of the previous ways, use the XML export of the update set to back up unfinished work before the clone, then re-import it and set it back to In progress. Or, move the update set to production and leave it in a loaded or previewed state. I haven't found one approach to be better than the other; pick one and make it the standard.
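
To illustrate note 2, here is a minimal "walking skeleton" go-live sketch – a fix script that activates components deliberately deployed inactive in earlier Sprints. The catalog item name is a hypothetical placeholder; in practice you would flip whatever components you chose to hide (catalog items, modules, UI actions, etc.).

    // Go-live fix script: activate catalog items that were deliberately
    // deployed inactive ("walking skeleton"). The item name is a placeholder.
    var item = new GlideRecord('sc_cat_item');
    item.addQuery('name', 'Request New Sales Laptop'); // hypothetical item
    item.addQuery('active', false);
    item.query();
    while (item.next()) {
        item.setValue('active', true);
        item.setWorkflow(false); // optionally skip business rules/notifications during the flip
        item.update();
        gs.info('Activated catalog item: ' + item.getValue('name'));
    }

Keep the flip itself in its own small update set (or a documented fix script) so it can be moved and executed as part of the go-live Change.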

I’m just beginning to scratch the surface on some of this, but I hope this gives you a general sense of how you can minimize your environment management efforts. As I say, in my experience, most of the activities using these techniques became rote and took little time in the overall SDLC. If you’re spending more than an hour or two a week, and it takes more than one person, consider some changes.

“Efficiency is doing better what is already being done.” – Peter Drucker

Why Drive a Cadillac? Part 2 https://sncma.com/2022/04/20/why-drive-a-cadillac-part-2/ https://sncma.com/2022/04/20/why-drive-a-cadillac-part-2/#respond Wed, 20 Apr 2022 20:12:14 +0000 https://sncma.com/?p=539 In Part 1, we examined the 5 Tiers of Service Management. This part focuses on the key differences between the tiers. Let's look at a couple of end-to-end processes through the lenses of Tiers 1 and 5 to illustrate the differences.

Tiering Examples

1. A user in building 3, floor 2 (Accounting department) reports an issue accessing SAP. The root cause is a misconfigured switch port.

  • Tier 1: User calls the Help Desk. A support agent opens a support ticket and does some basic troubleshooting – reboots, logs out and in. After 10 minutes, the agent tells the user they need to escalate and ends the call. The agent assigns to a SAP support group. After a few hours, an SAP engineer sees the ticket and begins to ask around if there are any known issues currently. He also confirms he can access SAP and notes no issues in the logging. He IMs local support in building 3 and asks if they can do a remote session. After some troubleshooting, they realize all computers on floor 2 are having trouble accessing some network resources. The ticket is reassigned to Networking. The next morning, the Networking team picks up the ticket and reads through all the notes. They check with the team verbally and via IM to see if any changes went in recently. After a few hours, they realize there was a switch card installed to fix a hardware issue 2 days ago. They begin to troubleshoot and by early afternoon, find the misconfiguration and fix it. They assign the ticket back to the Help Desk and the level one support team checks with the end user that service has been restored before closing the ticket.
  • Tier 5: The user submits an incident through the Self Service portal with a high impact, after confirming several members of his team are having the same issue. The incident is marked medium priority, tied to the SAP service and auto-routed to SAP support. SAP support picks up the incident and checks the SAP system for issues, seeing none. They view the CMDB map to check for upstream issues, and see that there are other incidents and a change tied to the network switch in building 3, floor 2. They change the affected CI to the switch and raise the priority to high, and the ticket is auto-reassigned to the networking team. The networking team's SLA-based queue dashboard moves the incident to the top, and the incident is assigned to a level 2 engineer automatically. The engineer, viewing the same CMDB map, makes the incident a child of another incident opened against the same switch. The team sees the change, and finds and fixes the configuration error based on the implementation notes for the Change. The parent is closed and the resolution flows to all children. Service is restored within 2 hours. Management reviews the change, sees a gap in the review process that allowed the issue to occur, and the change templates are updated to prevent the issue in the future.

2. A manager in Sales needs to onboard a new sales rep working remotely.

  • Tier 1: The manager emails the Help Desk with the details about the new rep. A support agent opens a support ticket and fills out what was provided in the email, but has to contact the manager to get additional information. After several back-and-forths, the agent begins to email other teams with the ticket information so that they can fulfill the portions they are responsible for, e.g. Procurement to get the hardware and Application Support to set up accounts. The agent has to manage the disparate fulfillment processes through phone, email and IM. Because of different lead times, it takes 8 days until the agent has everything she needs. Meanwhile the rep starts on day 10. The agent sets up the laptop, logs in as the new rep, sets up the applications, and ensures access. She then works with shipping to get the “office setup” sent to the new rep. The new rep receives everything after 13 days.
  • Tier 5: The manager submits an Onboarding request through the Self Service portal, filling out all required information, including choosing available hardware stored in the system by department. The system also contains a master list of applications by department. The system's fulfillment workflow for Onboarding sees the new rep is part of Sales, orders the Sales laptop automatically from Procurement, and auto-provisions all enterprise-wide and sales-specific applications. Procurement uses Asset and Inventory management, and has hardware in stock for such requests. The hardware is delivered to the Service Desk, and an Agent, who owns the master request, sets up the laptop for the rep and confirms the application access that has already been provisioned. The entire process takes 2 days. The Agent works with shipping to ensure the "office setup" is delivered the day the rep starts their position with the company. The system, based on the request, keeps a record of all hardware and application access received, and uses that data for managing shutoffs and reclamations when the sales rep transfers to another department or leaves the company. Management uses the requester's and fulfiller's departments and cost centers to report on how much service is being provided, and to whom, to manage budgets and do chargebacks.

Tier 1 vs. Tier 5

Tier 1 Service Management process and data flows

Tier 1

Characterized by:
  • Lack of self-service
  • Centralized help desk
  • Unrelated tickets
  • Tribal knowledge
  • “Catch as catch can” prioritization and resolution

Tier 5 Service Management process and data flows

Tier 5

Characterized by:
  • Multiple service channels
  • Distributed, specialized support groups
  • Prioritized and correlated ticket queues
  • Synchronization between core data, knowledge and the CMDB to align, diagnose, resolve and fulfill
  • Automation, orchestration and integrations to fulfill requests and sync systems

What I hope is an obvious conclusion from all this is that ServiceNow is able to achieve all these tiers. Many systems that were designed as ticketing systems first can reach the first, second or perhaps third tier, but struggle to achieve the goals of tiers four and five. Because ServiceNow was designed as a platform first, with ticketing – specifically ITSM ticketing – layered on top, it is flexible enough to achieve any desired level. When Service Management is viewed holistically – from a platform perspective and not just as a series of unrelated operational applications – it has the ability to help your company become a strategic leader in your industry. (Wow, this sounds terribly like marketing – sorry about that!) So why drive a Cadillac? This is why!

Happy transforming!

Why Drive a Cadillac? Part 1 https://sncma.com/2022/04/20/why-drive-a-cadillac-part-1/ https://sncma.com/2022/04/20/why-drive-a-cadillac-part-1/#respond Wed, 20 Apr 2022 20:09:29 +0000 https://sncma.com/?p=537 What do you want out of your Service Management?

I had a client recently refer to ServiceNow as a "Cadillac Escalade" and say that they "just needed a Kia". This is certainly a long way from when I started at ServiceNow in 2010, when the company and the platform were still just emerging from their "gutsy startup" phase. We've now reached the point where ServiceNow has become a "gold standard" in cloud platforms and Service Management, and customers are having to decide if they can afford such a high-end solution.

Anyone who has spent time in business realizes that many decisions are made strictly on financial grounds, or at the least on the question of what is the minimum required solution for the minimum price. While this exercise is black and white, it's often overly simplistic and short-sighted.

I’d like to use the remainder of this article to talk about the decisions made about using ServiceNow as your Service Management platform: What do you get out of it, what do you want to get out of it, and are you thinking about both in the right way?

Deciding what Service Management is to your business

For much of IT’s history in business environments, upper management’s view of it (and often the whole company’s) is mainly or purely operational: While it automates a previously manual process, it is there simply to “keep the lights on”, and isn’t core to the company strategy. As a subset of IT, Service Management serves the same function: Ensure people can do their work, and speed up the process of getting people what they need where possible.

Only recently have I started to see a shift in philosophy where companies and company decision-makers start to view IT and Service Management strategically: How can this business function actually be leveraged to make us more competitive, increase our margins, and allow our people to focus on strategy and not day-to-day repetition? Let's explore what this means. The following image shows the upward pyramid of Service Management from operational to strategic:

Service Management tiers

Here’s each tier in greater detail:
Tier 1
Organizations have a central mechanism for taking in issues and requests, assigning them, and working them through to completion. Issues and requests are lumped together; customers and fulfillers view everything as a “ticket” to get reported and completed. Processes are manual: the customer calls or emails, queue managers or general fulfillers assign out the work. No SLAs are defined or tracked – work is done “catch as catch can”. Knowledge and solutions are tribal. No CMDB exists or is leveraged for ticket alignment. No attempt to recognize patterns or recurrences. Reporting is basic and often compiled and manipulated “just in time”.

Tier 2

The central mechanism begins to delineate between types of requests and to define SLAs around each. For example, an issue preventing a person or persons from doing their job is prioritized over a request for a new software installation. Customers have other interfaces to submit and view their issues and requests – portal, mobile apps, etc. Fulfiller queues become better defined and therefore easier to assign work to. Management starts to standardize reporting and it becomes more real-time in nature. Knowledge begins to be documented. A CMDB is started for aligning issues and requests with physical and logical infrastructure. Fulfillers begin to recognize and report patterns.

Tier 3

Based on information in submissions, some routing is automated, bypassing the initial queue. SLAs begin to be used to measure performance and enforce accountability. The customer experience becomes both broader and more intuitive – more information and paths to get service exist, but the interfaces are easy to navigate and the correct information is presented at the correct place and time. The CMDB becomes core to understanding and analyzing the service environment – where are issues arising and how are they tied to the broader environment? Reporting starts to move up and down the enterprise hierarchy and is not just limited to single service areas or tiers. Knowledge management processes become defined to ensure knowledge is relevant and up-to-date.

Tier 4

The service desk becomes de-centralized as the CMDB and other mechanisms define and route tickets to the correct initial fulfillment groups (bypassing the generic “service desk” or “help desk”). Correct and available knowledge begins to deflect ticket creation as customers are able to self-serve. Other tickets, particularly requests, are auto-fulfilled (require no human interaction) with orchestration. CMDB relationships are defined allowing fulfillers to see impact across services and the enterprise infrastructure.

Tier 5

All ticket types are defined based on the information presented and the service required – the "right tool to get the job done". SLAs are defined by type, measured and viewed, and accountability exists for meeting them. Almost every submission is routed to at least a level 2 support group based on the information provided. Fulfillment flows are defined, automation/orchestration is used wherever possible, and fulfillment flows are not implemented without exploring and understanding what automation is possible. The CMDB and core data drive the routing, the fulfillment flows and the reporting. Fulfillers leverage the CMDB relationships to understand issues and change impacts across the enterprise. Reporting can be done from the fulfillment team and ticket type level all the way up to the enterprise, based on available core data. All opportunities for self-service and ticket deflection mechanisms are implemented and exposed to the customer. Knowledge is cultivated, managed, standardized and available to the correct audiences. The service management system becomes a system of record for all services delivered and aligns with systems of truth for devices, licensing and access.

Where do you want to be in this tiering?

As stated earlier, it’s much easier to do a cost-benefit analysis of Tier 1. But the real ROI comes from leveling up towards Tier 5. ServiceNow’s flexibility allows you to reach any of these tiers, so a goal of Tier 5 is very attainable. When your employees are no longer focused on day-to-day operational concerns, but rather focus their energies on business improvement, strategic initiatives and outside the box thinking, your business becomes a leader.

In Part 2, we’ll examine the key differences between Tier 1 and Tier 5.

A Good Leader Fears the Citizenry https://sncma.com/2021/12/02/a-good-leader-fears-the-citizenry/ https://sncma.com/2021/12/02/a-good-leader-fears-the-citizenry/#respond Thu, 02 Dec 2021 02:10:28 +0000 https://sncma.com/?p=250 “a camel is a horse designed by a committee” – Proverb

Over the past few years, there’s been a movement in the business world to espouse the concept of “citizen development”. Gartner defines a citizen developer in this way:

A citizen developer is an employee who creates application capabilities for consumption by themselves or others, using tools that are not actively forbidden by IT or business units. A citizen developer is a persona, not a title or targeted role. They report to a business unit or function other than IT.

Gartner: Definition of Citizen Developer

ServiceNow has espoused this concept in its marketing materials. You’ll see any number of websites, references, articles and online sessions talking about the need for citizen development and how ServiceNow supports it. From their website:

ServiceNow offers a series of citizen development tools, from low code to no code.
  • App Engine Studio and Templates
  • Flow Designer
  • Process Automation Designer
  • Integration Hub
  • Virtual Agent
  • Predictive Intelligence
  • Performance Analytics

ServiceNow: What is Citizen Developer

For the purposes of this article, I’m going to assume that citizen development is marked by these important aspects:

  • The development does not involve writing code, particularly logic constructs
  • The development does involve GUI elements that allow drag-and-drop construction
  • The developer does not need to be concerned with version control or other development processes
  • The development does not involve environment promotions or other release management aspects

Based on the blurb above, I’m going to assume that citizen development in ServiceNow (the platform), as considered by ServiceNow (the company), can involve any or all of the following:

  • Flows using Flow Designer
  • Table creation
  • Data elements (fields)
  • Data dictionary attributes
  • Form builds
  • UI Policies
  • Catalog Items

I’m sure I’m leaving quite a bit out, but I wanted to hit the “big stuff”. To me, this list encompasses the things Fred Luddy envisioned when the platform was first devised and built in 2004 (Tables, Fields, Forms, Policies) and the current marketing push around Flows to control all business logic.

Ultimately, what I think ServiceNow is envisioning is the ability to take a whiteboard or Visio-type flow and turn it into actual configuration in the platform, particularly by folks who aren’t developers by trade. This is certainly the Flow aspect. Additionally, the “Fred stuff” included being able to develop new data entry forms and business logic around them to digitize what was previously done on paper, email or spreadsheets. (Really, when you boil down ServiceNow to its core value, this is it.) Also done by “non-developers”.

Here’s the fundamental flaw in this thinking: This level of development democracy assumes the guardrails are big enough and strong enough to prevent the introduction of flaws into the system. These are simple flaws – inconsistency in form design or label nomenclature, and large flaws – applications that don’t fit platform architecture standards set by either ServiceNow or the platform owner(s). So what are the guardrails? ServiceNow will tell you Application Scoping, which silos application elements by namespace and security. However, if you are a platform owner – responsible for the overall health of the entire system – what good are silos if every silo contains a giant mess? It reminds me of the closet scene from The Marx Brothers “A Night at the Opera”.

Night at the Opera

I certainly have not seen cost-of-maintenance studies done on this, but I have a hard time imagining that the cost of "cleaning up" and/or maintaining applications developed by non-developers is less than the cost of development that uses an architecture, a methodology, and developers.

Consider this analogy: If you were going to try and develop a cogent political philosophy, would it be wiser and easier to read the missives of a few political experts across a range of theories, or turn to social media and attempt to cull the same from it? I often think of social media as citizen development. It’s the “everyone contributes and we hope something cohesive comes out of it” mentality. If this has worked, I’ve yet to see it happen.

I think of another analogy from my college days: In my Operational Management class, specifically the time focused on lean manufacturing, my professor told a story of visiting various auto manufacturing plants. When he visited the Mercedes-Benz factory, he told of a team of highly paid, highly skilled engineers who sat at the end of the assembly line and reviewed and fixed every defect in the car before it passed inspection. So when a consumer buys a Mercedes-Benz, they know for an expensive price, they are getting a quality car. (We’ll leave the cost of engineering and parts out of the equation for now.) Conversely, when he visited a Toyota factory, he saw that every line worker had an alarm they could ring and stop the assembly line whenever they noticed a potential defect caused by anything in the system – process, parts, etc. The lean manufacturing methodology accepts upfront costs of line stoppages in order to eliminate the possibility of downstream defects and the cost of correction. The moral of the story is each way results in a quality product, but the way to get there is completely different.

To come full circle, I see citizen development as choosing the Mercedes-Benz way – where accepting inherent defects means your developers' efforts will be spent on the back end. I say this because, in my experienced opinion, I've not seen citizen development produce results that are clean, designed to platform standards, defect-free, and simply able to be deployed with no concern. Conceptually, there likely are ways to put up enough guardrails, but I don't think it's possible in the ServiceNow platform today. Specifically, here are the things required for development, or related to citizen development in ServiceNow, that are outside the abilities of a non-ServiceNow developer:

  • Understanding how an application should be built given a platform architecture strategy.
  • Where a new table should live in the database structure.
  • Deciding what form views should exist and whether they should be forced.
  • Any management of record security.
  • Any coding. If you’ve worked with ServiceNow, you know there are plenty of business logic aspects that cannot be implemented without some code.
  • Troubleshooting Flow Designer bugs. As of Quebec, Flow Designer is still buggy and simply doesn’t work as expected.
  • Working with variables on Catalog Items. Since Catalog Items are a big aspect of the ServiceNow citizen developer push, it implies that a citizen developer knows how to define the correct variable types, when to use variable sets, how to do advanced reference qualifiers, and all the other "developer-y" stuff related to variables (see the example just after this list).
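
To make that last point concrete, here is what an "advanced" reference qualifier on a catalog variable typically looks like – a minimal sketch that assumes a reference variable pointing at sys_user and a second variable on the same item named department (both names are illustrative, not taken from any specific implementation):

    // Advanced reference qualifier on the variable's Reference qualifier field:
    // limit the choices to active users in the department selected in the
    // (hypothetical) "department" variable on the same catalog item.
    javascript: 'active=true^department=' + current.variables.department

Writing and debugging encoded query strings like this – and knowing when a variable set or a scripted dependency is the better tool – is squarely developer territory, which is the point.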

As I said many years ago at a Knowledge conference when the concept of citizen developer was introduced, “Ask a citizen developer when they should or should not extend the Task table.” I’d love to be proven wrong on this, but I’m still waiting.

Happy developing!

 
