Configuration – SNCMA (https://sncma.com) – A ServiceNow Architecture Blog

Wherefore Architecture?
https://sncma.com/2024/06/12/wherefore-architecture/ – Wed, 12 Jun 2024

If ServiceNow is built to support Citizen developers, why do we need ServiceNow architects?

“Thinking about design is hard, but not thinking about it can be disastrous.” – Ralph Caplan

Introduction

For almost 14 years in the ServiceNow space, and across a rapid expansion of the ecosystem, it has been interesting to observe and analyze various organizations’ approaches to developing and maintaining their ServiceNow environments. Specifically, how do organizations manage the inflow of business needs, the distribution and velocity of development, configuration and administrative work, and the ongoing maintenance of the platform? As the footprint of ServiceNow has expanded in conjunction with the expansion of the business functions the platform supports, I increasingly see divergence in these strategies. I’m writing this article to articulate these strategies and provide my view of how each succeeds and fails, along with my recommendations for the correct strategy given a company’s view of ServiceNow.

Management Approaches

There really are two ends of the spectrum for how ServiceNow environments are managed. At one end is the idealized view and what ServiceNow itself espouses: use of Idea and Demand Management to receive and vet business requirements, an oversight board – which ServiceNow calls a “Center of Excellence” – that does the vetting and prioritization of these requirements, Agile and PPM to manage the development work to fulfill these requirements, and an operational organization that handles release management, break/fix work, upgrades, and performance and security of the platform. If you’ve got your thinking cap on while reading this, you’ll quickly sense that this is intended for, and works best in, the largest ServiceNow implementations, where the platform is a large part of an overall enterprise strategy for a company.

The other end of the spectrum stems largely from traditional IT functions; that is, a purely operational model and mindset where all development is treated as one-off break/fix/enhance type work. Where “keep the lights on” is the primary and sometimes only strategy. In these organizations, ServiceNow development work is typically handled through Incident and/or Enhancement processes, and each task is designed, developed and released “in a silo”, usually without thought to larger strategic initiatives. In other words, the view of the development does not extend beyond the scope of the need elucidated.

With a 25-year career in IT, I’m certainly aware of and sympathetic to this mindset. I find it particularly prevalent in MSP or MSP-like organizations. It’s not that the people running these organizations intend to be “unstrategic” (not a word); it’s what they know. These mindsets are built over years and decades of running IT as an operational entity.

There is a cost to doing business this way – and this is the crux of this article. When you implement under an operational mindset, you necessarily build everything as a one-off. Critically, no design or architecture considerations are taken into account, which creates problems for platform maintenance, stability, health and optimization. These can range from the simplest quirks, like inconsistent forms and fields and re-created code logic, to large-scale issues with performance and user experience.

Examples

Here are some specific examples of development done without design or architecture prior to “hands-on” work:

  • A developer customizes an out of box ServiceNow application when a custom application would have served the requirement better. This leads to upgrade issues.
  • A developer builds a security requirement using client-side functionality, which is pseudo-security. This security hole is exposed when using integrations and server-side functionality to the same data.
  • A series of requirements for a single application are developed as one-offs. After these are implemented, the UI/UX experience is compromised, as now the form design is cluttered and out of sync with other applications. Adding UI logic and many related lists hinders the performance of form loads.
  • One developer uses Workflow, another Flow Designer, another a series of Business Rules, and another a series of Glide Ajax client scripts, all to implement similar requirements. Maintenance becomes hyper complex as each change or fix is a one-off “head scratcher”; “Now where is this functionality??”

I can argue that Agile is a contributor to this problem. Not the methodology itself, but the incomplete usage of the methodology. I often see organizations going no further than Stories to manage their work. While Stories done correctly are ideal for “state the business requirement, not the technical how” of documentation, without using Epics to group requirements into a cohesive release, and more importantly, without architectural design overseeing the Epic, the Stories are developed in silos and lead to the issues noted above.

Best Practice

In my experience, the best practice is to have an architectural or design review of all requirements prior to development. Some may only need a cursory review to confirm it can be built as a one-off. Others may need a directed technical approach because there are several ways it could be built, and a consistent approach platform-wide is best for long term maintainability. And some may need a complete analysis of pros and cons, buy versus build, custom application versus out-of-box in order to build the right solution for the business need and the platform sustainability.

I’ve included a diagram below that shows the “idealized” process flow, including a Center of Excellence that fulfills this function:

Center of Excellence

The concept of a Center of Excellence, or at least some level of architectural oversight, is not meant to be onerous or a process bottleneck. This is a common concern organizations have, and a reason they don’t do it. My argument is that the operational sweet spot for such a function lies somewhere in the middle of the spectrum: Organizations are free to be as fast and independent with business requirements as they choose. The oversight part of the process is a simple architectural design review of all Stories (requirements) prior to development, with the end result a proper technical approach to development. A good architect will quickly recognize when there are multiple approaches to implement a requirement and provide guidance on the correct approach, taking into consideration all aspects mentioned previously. If the Agile methodology is being used, this can be part of the grooming process.

The diagram above is one I drew for where the Center of Excellence lives in the overall development process, between the requirements gathering, and the execution, either as operational one-offs or as larger project-type initiatives.

ServiceNow’s Documentation on Center of Excellence

In the end, it comes down to an organizational decision, even if not made consciously: “Do we spend the time up front to ensure things are developed in a cohesive platform strategy way, or do we dedicate more time, money and resources to fixing issues when they (inevitably) rise in Production?”  The simple analogy is working to prevent forest fires, or dedicating resources to fighting forest fires.

It’s the Platform, Stupid* (Part 2)
https://sncma.com/2024/02/20/its-the-platform-stupid-part-2/ – Tue, 20 Feb 2024

* – A play on the famous James Carville quote about the economy, not implying that ServiceNow folks are stupid

It’s been a few years since I wrote Part 1 of this article, going through the history and evolution of the ServiceNow platform, and the morphing of the company strategy from platform to product. After working with multiple clients in the meantime, and reading lots of new marketing and going through many platform release upgrades, I thought it time to revisit the subject with new perspective and analysis.

A quick recap: In the early 2000s, ServiceNow (née “Glide”) was envisioned and built as an extensible business workflow platform, designed to replace paper and manual processes, but without a defined business application built in. The idea was that businesses would analyze their own processes and automate them using the platform components. Once this didn’t catch on, the SN founders built an application suite on top of the platform using what they knew – ITSM. This caught on, and in the subsequent years both customers and ServiceNow have used the extensibility of the platform to build and solve countless business process problems. As ServiceNow itself has gone public and had multiple leadership changes, the company has shifted development, sales and marketing focus to products it builds on top of its own platform. This is why most discussions around new releases are about Products, and not platform enhancements.

platform building

While this all may be natural progression for a company that goes public and has to answer to the markets and short-term financial interests, it poses some issues for those attempting to use ServiceNow as a platform rather than a series of products.

Buy versus Build

In the early days of ServiceNow, the process for implementing a business process solution was generally straightforward (other than specific ITSM processes, which were built in). As a consultant, you would listen to the business problem that needed solving, then design and implement a custom* solution using the platform components provided (see Part 1 for more detail). There wasn’t a longer discussion or decision required for build versus buy, since the platform was designed to be built upon. ServiceNow provided the components to build the tools (applications) to solve business problems, and the licensing was based on fulfiller versus requester. There was no further buy versus build decision to be made.

*NOTE: Although this could spawn an entirely separate discussion, I want to point out that in the ServiceNow world, “custom” is not a bad word, though it is often seen as such.  In reality, building a new application using the platform components ServiceNow provides is doing exactly as the founders intended.  It is also not really “custom” in the true sense of the word. It is simply a new way of using the components provided.

Simplification by Obfuscation?

The nature of prioritizing control over flexibility is that you take the power of the platform out of the hands of those who are best equipped to take advantage of it. This has been true long before ServiceNow and will continue to be true long after, but I believe it is exacerbated in the ServiceNow space by the factors previously mentioned: market forces, management changes, market strategies, misinformation and misconstrued information. Over the years, as ServiceNow has moved from a ticketing system to a strategic platform for companies, I’ve watched C-level executives inject common phrases like “stay out of the box” and “minimize upgrade efforts” into the lexicon. I can only assume these come from history with other platforms and from reading industry studies, rather than from deep knowledge of what this really means for their ServiceNow implementation. I also assume that because those making the financial decisions are saying these things, they become both gospel and strategy for those with a vested interest in their decision-making.

I liken what ServiceNow has done to the platform to using WordPress for website development rather than DreamWeaver. The latter is a framework that gives you pre-built components that experienced developers can use to build custom websites faster. The former is more for non-developers to implement fully pre-built websites with a small to moderate ability to make configuration changes. But for an experienced website developer, working with WordPress can be more challenging in certain circumstances, because things that could easily be modified in CSS aren’t always accessible. In this way, WordPress makes it easy to deploy a website that fits its model, but far more difficult to make what are often necessary changes after the fact.

Business Example:

Here’s an example of what I’ve been describing:

Business Requirement: A need to manage company events such as luncheons, meetings and guest visits. The company wants to use their ServiceNow investment and the platform tasking and workflows to do so.

business requirements

Platform Solution: A ServiceNow architect uses the Task application model to extend to a new Task Type called “Event”, creates a Record Producer to intake customer needs (available in the Service Catalog), builds sub-tasking records and initiates them with a workflow based on state changes to the parent Event. Form and list views, notifications and reports are configured to meet business needs. Security for the new application is layered on as needed. Any specific business requirement can be implemented without concern for “breaking” out of box solutions, and is completely upgrade-proof (ServiceNow doesn’t care about net new applications and components – they are completely ignored in upgrades).
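To make the platform approach concrete, here is a minimal sketch of the sub-tasking piece, assuming a hypothetical custom table u_event (extended from Task) and a hypothetical child table u_event_task; none of these names come from the original example:

```javascript
// After-update Business Rule on the hypothetical u_event table (extends Task).
// When the Event moves into a working state, spawn its fulfillment sub-tasks.
(function executeRule(current, previous /*null when async*/) {

    // Sub-task definitions; in a real build these would likely come from a
    // configuration table rather than being hard-coded.
    var subTasks = ['Reserve room', 'Arrange catering', 'Notify security'];

    for (var i = 0; i < subTasks.length; i++) {
        var task = new GlideRecord('u_event_task');      // hypothetical child table, also extends Task
        task.initialize();
        task.parent = current.getUniqueValue();           // tie the sub-task to the parent Event
        task.short_description = subTasks[i] + ': ' + current.getValue('short_description');
        task.assignment_group = current.getValue('assignment_group');
        task.insert();
    }

})(current, previous);
```

Because u_event and u_event_task are net-new tables, upgrades leave both the tables and this logic untouched, which is precisely the point of the platform solution.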

“Out of the Box” Solution: The customer ServiceNow team is told to stay “out of the box” and so attempts to build the solution in Service Request with a Catalog Item for intake. The Event data takes the form of many variables on the Requested Item. The workflow is driven off of variables, and Catalog Tasks are initiated by the workflow. The ServiceNow team has to customize the Request and Task forms for Event needs, creating maintenance issues – the application looks and functions one way for “normal” Service Requests, and a different way for Event Requests. Forms, lists, reports, notifications, security are all doubled with mutually exclusive conditions. Subsequent implementations like this use case in Service Request further complicate the configurations and maintenance.

Product Solution: A ServiceNow Account Manager hears “buzzwords” from the customer regarding their business needs and finds an out of the box product to license to them. The customer installs the new product and demos it for the business. Stakeholders find that the product only partially aligns with their needs. The business has to make a decision to either customize the ServiceNow product for their needs, or go through a rigorous and costly OCM cycle to change the way their business works to fit the ServiceNow product. If choosing the former, the company loses the ongoing maintenance benefits of staying “out of the box”, while still paying new licensing charges. Anyone who has worked in corporations knows the latter requires an incredible sales job to accomplish – businesses DO NOT like to change!

If you’re intuiting the conclusion I’m reaching with this example, you realize the irony is that what most would call the “custom” solution is actually the solution with the least development friction, the least technical debt, and the least upgrade concerns.

Conclusion

We’ve reached a concerning level of misinformation and mischaracterization of design and development decisions as ServiceNow has changed both their platform focus and marketing strategy. But what ServiceNow cannot change is the fundamental nature of the platform any product they build and market is based upon. Those architects and developers who truly understand this fundamental nature are much better equipped to deliver real value to their customers via shorter development cycles, maintainability, and upgradeability. Remember: “custom” is not a bad word!

What We’ve Got Here Is Failure to Communicate – Part 2
https://sncma.com/2023/05/09/what-weve-got-here-is-failure-to-communicate-part-2/ – Tue, 09 May 2023

In Part 1 of this article, I delved into Inbound and Outbound design considerations. Now, in Part 2, I’ll cover considerations for a true eBonding type integration as well as other general tips I’ve learned through the years building integrations.

eBonding Design Considerations and Good Practices

As mentioned previously, the example I’m working from is a bi-directional application to application integration, meaning that the systems are integrating application records throughout the lifecycle of that application’s workflow. For example, an Incident in system X that integrates with a ServiceNow Incident and exchanges updates throughout the life of both incidents, regardless of who has ownership of the resolution. Many know this concept as “eBonding”. Simply put, this is integration of both data and process, where what data is exchanged, and when, are as a result of process and may also influence process.

The technical designs I outlined in Part 1 work very well for eBonding, and are in fact designed to work with this practice. In addition to the technical aspects, here are other considerations when designing an integration solution for eBonding:

  • Both systems have to agree on the field mappings and data types. (No different than any other integration.)
  • Both systems have to agree when mapped fields can be updated. This is especially important for things like the ServiceNow “state” field, which either controls or is controlled by workflow. In our Incident example, often the only states that are allowed to be set by the integration are canceled or resolved. Other states may change in the other system but aren’t automatically updated by the integration as it may affect workflows, SLAs, etc. Rather, information may be included in a work note so each system is aware of activity in the other, but the process is not potentially adversely affected by it.
  • The integration needs to include mapping translations for field values that don’t match in usage across the systems. For example, if ServiceNow uses Priorities 1-4 and System X uses Severities 1-10, you’ll need to create a mapping matrix to map System X’s Severities into ServiceNow’s Priorities, and vice versa. (Also consider States, Categories, etc.) A small sketch of such a translation follows this list.
  • You’ll need to consider how Reference fields get populated and integrated, but I’ll discuss that more in the Good Practice Tips.
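As referenced in the list above, here is a minimal sketch of such a value translation as a ServiceNow Script Include; the Severity scale and the break points are illustrative assumptions, not the actual AT&T mapping:

```javascript
// Script Include: translates System X Severity (1-10) to ServiceNow Priority (1-4) and back.
// Called from the inbound transform script and from the outbound payload builder.
var EbondValueMapper = Class.create();
EbondValueMapper.prototype = {
    initialize: function() {},

    // Inbound: System X severity -> ServiceNow priority
    severityToPriority: function(severity) {
        var sev = parseInt(severity, 10);
        if (sev <= 2) return 1;      // critical
        if (sev <= 5) return 2;      // high
        if (sev <= 8) return 3;      // moderate
        return 4;                    // low
    },

    // Outbound: ServiceNow priority -> System X severity
    priorityToSeverity: function(priority) {
        var map = { 1: 1, 2: 4, 3: 7, 4: 10 };
        return map[parseInt(priority, 10)] || 10;
    },

    type: 'EbondValueMapper'
};
```

The same pattern works for States and Categories; the important part is that both systems agree on the matrix before any code is written.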

I’m including diagrams below from the AT&T Incident eBonding I built for ServiceNow. They detail the integration flow for two scenarios: a “Proactive” incident initiated by AT&T, and a “Reactive” incident sent to AT&T. In both scenarios AT&T is the owner of the incident – responsible for the resolution – as the use case is that AT&T owns the customer’s network and the incident is network related.

Note the listing of updateable fields, and when they can be updated, as well as the uni- and bidirectional flows of data.

Proactive Ticket
Diagram 2: Example eBond flow for an AT&T initiated Incident into ServiceNow
Reactive Ticket
Diagram 3: Example eBond flow for an AT&T Incident initiated in and by ServiceNow

The keys to a successful eBonding integration are the discussion of, and agreement on, the what and when of the data that will flow between the systems, and the rigorous test planning and testing of all lifecycle scenarios. These are vital to ensure you don’t break existing internal processes already developed and running in your ServiceNow environment.

Other Good Practice Tips

In addition to the primary design considerations outlined above, I recommend the following:

  • While security is of the utmost importance, and is often the thing customers think about first, try to design and build your integration without the security layer, or use the most basic security possible. This allows you to prove out the design and confirm the connectivity first, and assumes you have sub-production environments to develop and test in. Security can almost always be layered in as a second step. This eliminates a layer to troubleshoot as you iterate your development.
  • You’ll need to consider and account for integrating ServiceNow Reference fields *. As you know, these are fields that are stored as sys_ids in the integrating ServiceNow record, which is not likely to mean anything to the external system. Here are some guidelines for integrating Reference fields:
    • Consider if there is value in having the Reference data tables stored and maintained wholly in each system, so each is aware of the full dataset and mapping is an easier exercise. (There are good reasons to do this, and reasons it’s often either impossible or a bad idea.)
    • Ensure that both systems have a field that uniquely identifies the reference in both systems. For example, for users records, email address may suffice.
    • Ensure that field data is included in the bidirectional payloads
    • Use the “Reference value field name” in your Web Service Import Set Transform Map Entry to use this field to choose the right ServiceNow reference record (using our out of box functionality again!)
    • Set up your Outbound Field Mapper to map the ServiceNow field to the System X field, so that the external system doesn’t get the sys_id (see the sketch after this list)
    • And for goodness sake don’t try to use display value strings as unique identifiers!
  • I suspect integrations that don’t use REST (or even SOAP) could use the same approach I’ve outlined. Even a file-based export could work the same, save for the nature of the outbound and/or inbound payloads.
  • Wherever possible, the outbound integrations from ServiceNow should be run asynchronously. This is a general good practice with all integrations. For example, if the integration is triggered via a Business Rule, the Business Rule should be set to async if at all possible. This way the end user and the system (UI) do not wait on the integration to move forward, and the integration runs as system resources are available to it. The exception is if there is a business requirement for the system to wait on the integration, e.g. the end user is expecting to get a result back from the external system before proceeding. There are also technical reasons this can be a challenge: For example, you cannot run an async Business Rule on a comment or work note addition.
  • Only use a Scripted Web Service if the inbound payload will not be in a name:value format that can easily map into a staging table, and rather requires scripting logic to manage the payload before injecting it into a ServiceNow record. Consider this a “last resort” in most cases.
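As a companion to the Reference field guidelines above, here is a minimal sketch of the outbound half, dot-walking to a business key so the external system never sees a sys_id; the payload keys and the u_location_code field are illustrative assumptions:

```javascript
// Add reference-field values to the outbound payload as business keys, not sys_ids.
// 'current' is the integrated record (an Incident in this sketch).
function addReferenceFields(current, payload) {
    // Users: send the email address as the agreed unique identifier
    payload.requested_for_email = current.caller_id.email.toString();

    // Other references: send a key both systems know, never a bare display name
    payload.location_code = current.location.u_location_code.toString();  // u_location_code is a hypothetical key field

    return payload;
}
```

Inbound, the matching Web Service Import Set Transform Map entry would set “Reference value field name” to the same agreed key (e.g. email), so ServiceNow resolves the reference without any script.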

Some of these points could warrant their own article; hopefully this article triggers your design thoughts and gives you ideas about how to manage your integrations.

Conclusion

Since its early days ServiceNow has had integration technologies built into, and fundamental to, the platform. Many a system has been integrated into ServiceNow in all shapes and flavors. While all kinds of new tools inside and outside of ServiceNow have attempted to simplify integrations, the “good old” ways still work when no other options exist (or existing options don’t quite fit the bill).


What We’ve Got Here Is Failure to Communicate – Part 1
https://sncma.com/2023/05/08/what-weve-got-here-is-failure-to-communicate-part-1/ – Mon, 08 May 2023

Good Practices for Designing Integrations in ServiceNow

Captain: You can have the easy way, Luke… Or you can have it the hard way… It’s all up to you. – Cool Hand Luke

If you work in a ServiceNow environment in 2023, it’s more than likely you’ve got it integrated with other systems. Given ServiceNow’s place in the market, it’s unlikely that an instance is running in an environment small enough or segregated enough to not need to be integrated with other systems. At the very least, you’re likely getting your core data from somewhere outside of ServiceNow, and hopefully not through a manual import. (Who wants to keep up with that effort?) You may be using a “good old” LDAP integration, or you may be using middleware, or an Integration Hub pre-built solution. Regardless of the solution, I’m going to use the rest of this article to talk about good practices for how integrations should be designed in ServiceNow so that your applications, and indeed the platform as a whole, are protected from possible integration chaff, and so they can be easily extended by non-coders when the need arises. I’ll primarily cover custom Web Service integrations, with the intention that if you understand how to design these kinds of integrations the knowledge translates well to all integrations.

In Part 1 of this article, I’ll delve into Inbound and Outbound design considerations, and in Part 2, I’ll cover considerations for a true eBonding type integration as well as other general tips I’ve learned through the years building integrations.

A quick bit of “curriculum vitae” to establish my bona-fides: I’ve been doing ServiceNow integrations since 2011; I was one of the early ServiceNow Professional Services consultants to delve into integration work. I developed one of the first AT&T eBonding integrations and gave the code and configuration to ServiceNow development to leverage as a packaged offering. I also built the ServiceNow side of the Workday to ServiceNow connector for Workday (the company). I’ve focused primarily on SOAP (early on) and REST based Web Service integrations. I also helped build the first iteration of the Perspectium DataSync tool.

Baseline Knowledge

This article assumes the reader has a baseline knowledge of how to do integrations, both in general and in ServiceNow. It also assumes you have knowledge of the various ways that ServiceNow does, or can do, integrations “out of the box”.

The examples in this article are based on Diagram 1. The example is a bi-directional application to application integration and includes the following:

  • The REST protocol with JSON payloads
  • A Web Service Import Set to stage the inbound data
  • An integration with a Task-based application in ServiceNow
  • A field mapping table to manage inbound and outbound data updates
integration
Diagram 1: Example Salesforce Integration Using REST, a Web Service Import Set and a Mapping Table

Inbound: Default to Using a Staging Table

You should always default to staging the inbound data in Web Service Import Sets (WSIS). These are nothing more than Import Set tables with a slightly extended API. (I’ve honestly never needed anything more than the standard API calls when using these tables.) Here are the reasons these tables should be the place to integrate into ServiceNow:

  • Staging the data insulates your application tables from data issues with inbound integrations. This allows you to build both data and logic safeguards into your integration. External systems can use the WSIS Table APIs to inject data into ServiceNow, where it waits to be transformed into application or core-specific data. Transform logic can ensure that bad or malformed data doesn’t make its way into your SN processes, preventing potential SN instance issues.
  • Staged data in WSIS can be transformed like any import set. This means SN administrators who may not be familiar with Web Services or integration design in general can still configure transform maps and transform logic. Many of the future changes to the integration can be handled by anyone who can maintain transform maps.
  • Staged data can be used for troubleshooting integration issues: If there’s an issue with the integration after the inbound request has reached ServiceNow, the import set record serves as an auditable trace of the raw data received. Often issues can be solved by a review of this data, e.g “Hey we agreed you’d send the data in format YYYY-MM-DD and you sent MM-DD-YY.” Clever developers will set up ways to store raw JSON payloads and integration messages (errors etc) in the Import Set record.

Things to note with this approach:

  • For most situations, you’ll need to ensure the integrating system receives the unique identifier of the record created by the transform, not the import set record. Recent versions of ServiceNow’s Import Set API appear to do this inherently in the JSON response.
  • The import set will need to be set up not to use the default ServiceNow system delete property of 7 days if you want to be able to trace issues older than this.

ServiceNow documentation can show you how to achieve both of these.
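To illustrate the transform-logic safeguards mentioned above, here is a minimal sketch of an onBefore Transform Map script on the staging table that rejects malformed rows before they ever touch the application table; the source field names (u_opened_date, u_short_description, u_import_comments) are illustrative assumptions:

```javascript
// onBefore Transform Map script on the Web Service Import Set (staging) table.
// 'source' is the import set row, 'target' is the application record being built.
(function runTransformScript(source, map, log, target /*undefined onStart*/) {

    // Guard: the agreed date format is YYYY-MM-DD; skip rows that don't comply
    var opened = source.getValue('u_opened_date') || '';
    if (opened && !/^\d{4}-\d{2}-\d{2}/.test(opened)) {
        ignore = true;                                  // skip this row; it stays in the import set for review
        source.u_import_comments = 'Rejected: bad date format "' + opened + '"';   // hypothetical audit field on the staging table
        source.update();
        return;
    }

    // Guard: don't let an empty short description create an unusable record
    if (!source.getValue('u_short_description')) {
        ignore = true;
        source.u_import_comments = 'Rejected: missing short description';
        source.update();
    }

})(source, map, log, target);
```

Because the raw row is preserved in the staging table, the rejected data is still there for the “you agreed to send YYYY-MM-DD and sent MM-DD-YY” conversation.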

For folks reading this who have become skeptical because Integration Hub doesn’t take this approach: I learned this approach from ServiceNow employee #1 (those who know, know). My standard tack is to believe those who created the platform over the johnny-come-latelies.

Outbound: Create Extensible Field Mappers Instead of Writing Code

While WSIS are standard ServiceNow functionality, this recommendation is my good practice, and one I’ve espoused for all platform development. The goal is to build solutions that write code once, and build configurations that are extendable for future changes – ones that can be managed by non-coders. Think of it like a custom Transform Map for your integration. In the diagram above, this is the “X Request Map” at the middle bottom. In its simplest form, the table contains the following:

  1. The table and field of the application in ServiceNow
  2. The table and field of the integrating system
  3. The nature of the integration: Inbound, Outbound or Bidirectional
  4. An active flag

For #1, you can use the Table Name and Field Name dictionary field types. (The latter is dependent on the former; i.e. you choose the Field from the Table selected – see ServiceNow’s documentation on dictionary field types.)

Numbers 1 and 2 tell the integration which tables and fields from the two systems map to each other – conceptually, exactly like a Transform Map, although one of the table/field combinations in this case is from the external, integrating system. Number 3 is a choice drop-down with options for Inbound, Outbound, and Bidirectional. Finally, the Active flag (Number 4) tells the integration whether the mapping is currently in use.

In Diagram 1, the bottom right side of the image shows how and where this is used. In many integrations I’ve seen, the creation of an outbound request from ServiceNow is done with pure code: the payload is built out via code, and changing the payload requires changed code. My suggested approach is to use Business Rules to trigger the request – when an update to an integrated record occurs, trigger the initiation of an outbound request. I use a Script Include function to build the request, so that it can be called from multiple places. Most importantly, I use the mapping table to determine what fields should be sent, and the field names to get the values from the ServiceNow record. The process flow is:

  1. The integrated ServiceNow record is updated
  2. A Business Rule running on that record’s table determines if the update needs to trigger the outbound integration
  3. The Business Rule code calls a Script Include function, passing it the current GlideRecord
  4. The Script Include function queries the mapping table, filtering on active, type=outbound, table is the current table
  5. The Script Include function loops through the result, pulling the values for the external system fields and the GlideRecord field values, building an outbound name:value pair payload
  6. The Script Include function triggers an outbound REST message and attaches the payload
  7. The Script Include function processes the response as desired

Important Note: Wherever possible, the Business Rules should be run async. There is more on this in part 2.


If this is built correctly, the major benefit is that future updates to the integration can be completed with updates to the Mapping table, rather than with code. A true low-code, no-code solution!
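Here is a minimal sketch of steps 3 through 6 of that flow; the mapping table name (u_integration_field_map), its column names, and the REST Message names are illustrative assumptions:

```javascript
// Script Include: builds and sends the outbound payload using the mapping table.
// Called from an (ideally async) Business Rule: new XIntegrationUtil().sendUpdate(current);
var XIntegrationUtil = Class.create();
XIntegrationUtil.prototype = {
    initialize: function() {},

    sendUpdate: function(gr) {
        var payload = {};

        // Steps 4-5: read the active outbound/bidirectional mappings for this table
        var map = new GlideRecord('u_integration_field_map');            // hypothetical mapping table
        map.addQuery('u_active', true);
        map.addQuery('u_sn_table', gr.getTableName());
        map.addQuery('u_direction', 'IN', 'outbound,bidirectional');
        map.query();
        while (map.next()) {
            var snField = map.getValue('u_sn_field');
            var extField = map.getValue('u_external_field');
            payload[extField] = gr.getDisplayValue(snField);              // display values keep sys_ids out of the payload
        }

        // Step 6: send the payload via a pre-configured outbound REST Message
        var rm = new sn_ws.RESTMessageV2('System X eBond', 'Update Record');   // hypothetical REST Message and method names
        rm.setRequestBody(JSON.stringify(payload));
        var response = rm.execute();

        // Step 7: process the response as desired
        gs.info('System X response (' + response.getStatusCode() + '): ' + response.getBody());
    },

    type: 'XIntegrationUtil'
};
```

With something like this in place, the async Business Rule is reduced to a condition and a one-line call, and adding or removing integrated fields becomes a data change in the Mapping table rather than a code change.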

More to come in Part 2.

PD (Platform Disfunction) is Treatable
https://sncma.com/2023/04/14/pd-platform-disfunction-is-treatable/ – Fri, 14 Apr 2023

The things ServiceNow should change or enhance yesterday

“Continuous improvement is not about the things you do well — that’s work. Continuous improvement is about removing the things that get in the way of your work. The headaches, the things that slow you down, that’s what continuous improvement is all about.” – Bruce Hamilton

I’ve written previously about the power of the platform, and my belief in its terrific original design and flexibility. In recent years, in its push to create and sell products, ServiceNow has sacrificed enhancements to the platform, which we architects, developers and admins have to work around and explain to our customers. In this article, I’ll discuss some of the enhancements I wish ServiceNow would implement now (and in some cases should have done long ago). While selfishly these would make my life and the lives of people who manage and work on ServiceNow easier, these are also features that will keep ServiceNow ahead of, or at least apace with, the competition. And let’s not overstate our selfishness – some of these are great for requesters and customers too.

Rich-text / HTML Comments and Work Notes

We don’t live in a plain text world any more. … ServiceNow should enhance the Comments and Work notes fields to support rich text and HTML formatting. One of the important outcomes of having this feature is the ability to include inline images and marked-up text, so that agents and customers can exchange examples in order to resolve issues more expediently. Consider any IT firm that is troubleshooting a customer issue via Customer Service Management. The ability for both the customer and the support agent to supply screenshots with text and arrows to explain the exact issue or fix is far easier to communicate and comprehend than a plain text explanation (“a picture is worth 1000 words”), or a text description plus an attachment that don’t live together in the UI; that method is cumbersome and unintuitive. Additionally, rich text / HTML notes can go out and be received via email – the bane of our existence but a fundamental part of how “business is done” no matter how much we fight it or come up with alternatives. (I don’t know of any email system that doesn’t support this formatting.) Regardless of whether the customer is viewing these marked-up notes in the Service Portal or via email, their experience is enhanced, and in the best cases, their issue can be resolved faster.

Editable Comments and Work Notes

I’ll include this as a sub-header under the enhanced notes banner. If we’re going to make comments and work notes rich, let’s go ahead and make them editable after saving in select cases and to select people. I say the latter part because if an agent has entered an Additional comment and the system has informed the customer of the comment via email, it’s likely a bad idea to turn around and edit that note. However, there are plenty of cases where the ability to edit a Work note is useful, and sometimes security reasons why (someone has put a password in plain text in a note). I haven’t devised a hard and fast rule of what should be editable when and by whom; let’s start with the functionality and figure it out from there.

Enhanced Attachments

Attachment functionality has been basic since the platform originated: Users with write access to any record in the system can attach files to that record and users with read access can view those files. There isn’t much functionality beyond this other than the ability to add all the attachments to all instances of an outbound email notification.

Customers have been asking for years for additional functionality around attachments:

  • Classifying attachments as internal (fulfillers or employees) and external (requesters or customers), much like Comments and Work notes
  • Specifying more complex security around each attachment on a record
  • Choosing particular attachment(s) to send with an email notification (in real-time)

I’m sure there are others but the point is made. Getting into the weeds on how attachments are stored in ServiceNow is a discussion beyond this article. Suffice it to say there is a great demand for greater flexibility around the classification and security of attachments, beyond “this attachment belongs to this record”.

Requested by and for at the Task* Level

*Assumes you understand the Task table hierarchy and inheritance.

This is one of the most common customizations implementers have been doing as long as I’ve worked on ServiceNow. The basic thesis is this: For every Task (every Task), the system should be able to record, track and report on who requested the work, and who it’s requested for. This seems so simple. I think the lack of this in the platform is residue from the early days when ServiceNow was primarily just an ITSM system, and as such, they put a Caller field on Incident, a “Requested by” on Change, a “Requested for” on Service Request, and then didn’t think past it. In subsequent applications, they added “Requested for” or “Requested by” on certain applications, but it’s not consistent across the platform.

(Some may say “what about the ‘Opened by’ field on Task?”. While it’s great that this field exists at the Task level, consider this: An administrative assistant calls in a request to the Service Desk for something for his CEO boss. The Service Desk opens the Request. In this case, the “Requested by” is the admin assistant, the “Requested for” is the CEO, and the “Opened by” is the Service Desk agent. I think this field is needed and serves a distinct purpose to the others.)

To this end, I’ve worked on many implementations and have often recommended these fields be added to the Task table and used as the de facto values on all Task forms, lists, reports, etc. In addition to having a consistent approach and data/field structure on all Tasks (work being performed), it also enhances reporting at the Task level, and can be used to report on organizational performance: How many requests is IT delivering to HR? And vice versa? Having your requesters and assignees all at the Task level, along with good core data, allows you to take your Service Management to the next level. But this should not fall on implementers to customize; ServiceNow should fix the platform so it’s “out of the box” this way.

More Granular Log Timestamps

This is a feature purely for admins and developers. Because the platform timestamps on all records – Created (sys_created_on) and Updated (sys_updated_on) – are granular only down to the second, it’s often hard to troubleshoot the order of processing execution. After all, many of these executions are happening at the millisecond level. For example, if you’re troubleshooting a complex script with lots of logging, when you view the Script Logs or the more general Platform Logs, because you can only sort down to the second, you can’t see exactly the order of your logging. Of course you can number your log statements, but you lose the order of other logging that may be occurring outside of your explicit statements. This is important when other things in the system may be impacting your code. In an ideal world, at least for logging, you could see the exact order of execution. Indeed, this was possible when I was working at ServiceNow and could elevate to maint access on the platform (access above admin, only available to ServiceNow employees), and I can tell you from experience it made my troubleshooting much easier.

I’ll hedge my statements by saying this is only really necessary for logging – Task-based work and other auditing is typically fine at the hours:minutes:seconds level.
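Until the platform exposes millisecond granularity, one hedged workaround is to stamp your own: a minimal sketch that prefixes each log statement with milliseconds since epoch, so the true execution order can be reconstructed regardless of how the log list sorts:

```javascript
// Prefix log statements with a millisecond timestamp so ordering survives the
// one-second granularity of sys_created_on in the log tables.
function logMs(msg) {
    var ms = new GlideDateTime().getNumericValue();   // milliseconds since epoch
    gs.info('[' + ms + '] ' + msg);
}

logMs('Before the expensive query');
// ... code under investigation ...
logMs('After the expensive query');
```

It only helps with your own statements, of course; platform-generated log entries are still ordered only to the second.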

Other Quick Hits

Here are a few others that I’ve addressed in other articles or will be in the future:

Conclusion

I’ve written about some of the most common areas of concern for customers, things I’ve learned from 13 years of ServiceNow implementation. There’s still plenty of power in the platform – it’s why many of us started working with ServiceNow and what keeps us evangelizing about its power. The ask is simple: ServiceNow should solicit feedback from its most experienced implementers, homing in on the most common platform concerns that birth customizations of all shapes and sizes, and devote some of its massive development resources to these changes and enhancements. I’m sure this can be done in parallel with licensed application development. So do it, and keep this platform great!

Building Core Strength
https://sncma.com/2023/02/20/building-core-strength/ – Mon, 20 Feb 2023

Why good core data is both the roots and the flowers of your ServiceNow tree

“A tree with a rotten core cannot stand.” — Aleksandr Solzhenitsyn

In the fitness world, and in fact the physical human world, your core is the central part of your body. It includes the pelvis, lower back, hips and stomach. Exercises that build core strength lead to better balance and steadiness, also called stability. Stability is important whether you’re on the playing field or doing regular activities. In fact, most sports and other physical activities depend on stable core muscles.

As ServiceNow has moved further towards being a product company and less a platform company, it’s easy to lose sight of the aspects of the system that are core to its functionality and its value. If you’re solely focused on products*, it’s akin to building big arms and shoulders, and large calves and thighs, but ignoring your back, abs and glutes. Eventually you’ll be a Popeye-ish figure, unable to balance because you’re both disproportionate and “weak in the middle”. In this article, we’ll discuss what’s core to ServiceNow, what benefits having a good core provides, and how to build and maintain this core.

*Products, or Applications and Application Suites, are things like ITSM, CSM, HRSM, ITBM, ITOM, and their components. These are the things ServiceNow builds, markets and licenses on top of the core platform.

What is the ServiceNow Core?

There are several aspects to the core platform. I’ve highlighted some of these in a previous post, primarily the development components that make up all the applications. In this post, I’m going to focus on the core data, which at the least drives consolidated reporting and, as I’ll elucidate later, at best gives full insight into how your business is running.

organization navigation menu

From a data perspective, the core data are the tables that can be seen in the Organization application in the left navigation menu.

The main tables are:

  • Companies
  • Departments
  • Locations
  • Business Units
  • Cost Centers
  • Users
  • Groups

Note: Vendors and Manufacturers are Companies with particular attributes, not unique tables.

If you look at the schema map for any of these tables, you’ll see how many tables reference these records. For example, the Department table is referenced 746 times in my largely out-of-the-box PDI. Most of these references come from the CMDB, and indeed, it is hard to use the schema visualization ServiceNow provides because of the number of CMDB tables it needs to draw to represent the schema. If you look at the dictionary references to Department, there are still 36 entries.

However, this is just part of the usage of this data. Consider the dot-walking use cases for Department. (If you don’t understand what dot-walking means, please refer to the ServiceNow documentation.) Since Department is a field on the User record, everywhere a User reference exists, Department can be used by dot-walking to it. Looking at the dictionary references to User, in my PDI there are 784 non-CMDB fields across the platform. So there are 784 places where you can inherently filter on or group by Department by dot-walking from the User reference field.
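A quick sketch of what that dot-walking looks like in a server-side query; the same caller_id.department.name path works in list filters, report conditions and GlideAggregate groupings:

```javascript
// Count active incidents whose caller belongs to the Sales department,
// dot-walking from the caller_id reference through User to Department.
var gr = new GlideRecord('incident');
gr.addActiveQuery();
gr.addQuery('caller_id.department.name', 'Sales');
gr.query();
gs.info(gr.getRowCount() + ' active incidents requested by Sales');
```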

Because the in-platform schema is overwhelmed by the CMDB, I drew a diagram of just how these core tables tie together:

core tables

Note: Depending on your organization, you may not need all these tables populated. Smaller organizations may not distinguish between Departments, Business Units and Cost Centers.

Building and Maintaining Your Core

Experienced system administrators and ServiceNow developers are familiar with these tables and this data. What I’ve often found is there’s an effort during the initial implementation to populate the required tables, then the maintenance is lacking and data becomes stale or messy.

Here are some common examples:

  • The data is imported once and either an integration or an ongoing process for maintaining the data isn’t implemented
  • The company does a re-organization and user departments, cost centers, business units aren’t updated
  • Unique identifiers aren’t determined for the core records and subsequent imports create duplicates
  • Companies treat users like tickets – they just need to be able to login, have the correct roles, and “life is good”

Here’s a small example I ran into recently: a customer had done a series of User imports from other systems without clearly identifying and marking a unique field. An integration was built from Salesforce using the email address as the identifier of a Requester (User) in ServiceNow. An issue was reported after we went live because the Requester was incorrect. The root cause was that there were three active user records with the same email address and the system picked the first one sorted by sys_id. This issue had not been identified previously because the two bad records weren’t being used by actual users.

In these scenarios, core data quickly becomes utilitarian and not useful for broader service management insight or improvements.
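Before (or alongside) fixing the import, a quick audit can surface this kind of duplication; here is a minimal sketch using GlideAggregate to find active users sharing an email address:

```javascript
// Find active user records that share an email address, the situation that
// broke the Salesforce Requester lookup in the example above.
var ga = new GlideAggregate('sys_user');
ga.addQuery('active', true);
ga.addNotNullQuery('email');
ga.addAggregate('COUNT');
ga.groupBy('email');
ga.addHaving('COUNT', '>', '1');
ga.query();
while (ga.next()) {
    gs.info('Duplicate users for ' + ga.getValue('email') + ': ' + ga.getAggregate('COUNT'));
}
```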

My recommendations for implementing and maintaining good core data are as follows:

  1. Identify sources of truth and system(s) of record for core data. This is an organizational best practice that certainly applies to ServiceNow as well. It’s rare that ServiceNow is or should be the source of truth or system of record for core data, perhaps other than local User Accounts, Groups and Group Memberships. For example, Active Directory is often the system of record for users across the enterprise. As an organization, identify these systems and implement integrations to receive data from these systems of record.
  2. Identify and implement unique identifiers for data records across systems. Akin to my example above, and assuming you’ve done step 1, before importing data from a system of record you need to determine the unique identifier from the source system. Ensure that ServiceNow has this field in the destination table (and import set table), and set up your transform maps or other integration logic to coalesce on this field’s data. This is critical to ensuring duplicate records are not created.
  3. Set up your imports and transforms to ensure core references are populated. You’ll need to order your transformations so that references to core table records on other core tables are populated – see the table diagram above. For example, in order to set the Location on the User record, the Location table needs to be populated first. However, if you want to use the “Contact” field on the Location records, you’ll need the Users in place. The reality is you’ll need to do multiple transformations or scripting to handle this circular dependency. (Challenge yourself and try multiple transformations!)
  4. Use the Production (“PROD”) instance as the system of record for core data across ServiceNow instances. Within your ServiceNow environment, PROD should be the system of record for this core data; sub-production environments will get their core data from PROD via clones. You can and should use sub-production for testing core data integrations, but the data itself should come from PROD. This includes Groups and Group Memberships wherever possible, save for a one-off when development requires a new Group that cannot exist in PROD prior to release. (Think about this as you do development – often a Group can be created in PROD without impact to process.) Using PROD as the system of record for this data means you have matching sys_ids of these records across your environments, and references to this data will not break in clones or code and configuration promotions. It is fine and expected to create additional core data records in sub-production instances – test users in your TEST instance for example – but use PROD as your source of truth.

Benefits of a Strong Core

For experienced system administrators and ServiceNow developers who are aware of and/or follow good practices, I haven’t mentioned anything they don’t already know. Sometimes it’s a matter of time and execution rather than knowledge. But what is sometimes not known is why this is important beyond having good, clean data in your systems – what the larger benefits of having this data correct and available are.

It is hard to generalize all the benefits into clean, succinct bullet points. What you can do is move past the ideas of “number of tickets open/closed” and SLAs. Here are examples of the use of good core data, and hopefully it will trigger your imagination to think how it might apply to your organization and its business needs:

  • Using the Department and Cost Center data tied to the User References on task-based application records, you can see what organizations are delivering services to what organizations, and use this data for charge-back accounting. For example, IT has completed 500 Incidents for Sales, or HR has fulfilled 300 service requests for Finance. With timekeeping and cost accounting, this data could be used to flow-down into cross-department accounting.
  • Analyze trends of types of Incidents and Service Requests by Location (again by requesting User). This analysis could reveal Incident types that could be converted to Problems that are localized but could be avoided in the future.
  • Group by core data points to determine if certain organizations or locations could benefit from a new or modified service (Incident deflection?)
  • Align new hires and intra-company moves with Department so that standard packages can be pre-ordered (rather than asking each time). For example, the Sales Department employees always get certain access, software and hardware; this could be aligned with the Department so that when a new hire is requested who is part of Sales, the access, software and hardware can be automatically ordered.

For further reading, I did detail an example of what level of service can be provided, reported and analyzed when your core data is complete and used in a previous blog: Tier 5 Service Management.

Hopefully these examples trigger your own ideas about using referential core data to improve insight and improvements to your own organization.

Conclusion

At a cursory level, it is fairly obvious why having good data in ServiceNow is beneficial: clean is always better than messy. However, there’s more benefit than just cleanliness. Having accurate, up-to-date core data can help take your Service Management to “the next level” – both understanding what is occurring in your organization at a deeper level, and being able to make informed judgments about how to deliver Services that maximize benefit and minimize human effort. So start with your roots – good core data – and cultivate the ideas and features that will make your Service Management bloom.

Should You Go with the Flow?
https://sncma.com/2022/11/15/should-you-go-with-the-flow/ – Tue, 15 Nov 2022

A realistic analysis of Flow Designer

In the Kingston release (I think – it’s hard to find the exact history), ServiceNow debuted “Flow Designer”, ostensibly a newer and better way of creating automated workflows. The idea was that the Workflow engine was coming to the end of its useful life, and the platform needed an upgraded way to automate processes and give more power to non-developers and non-ServiceNow admins, fulfilling the marketing pitch of the “Citizen Developer”.

I’ve begun working with Flow Designer and completed the primary micro-certification. After both studying Flows and using them in the real-world, I’m struggling to “get onboard” with either the marketing or the reality of using them. Despite my rapidly advancing age, I try to keep an open mind about these things and, when I get frustrated, stop to think, “Am I approaching this from the wrong angle? Am I missing something obvious? Am I too set in my ways?” However, after this experience and these considerations, I have several observational concerns about Flow Designer:

The Flow UI does not align with how humans visualize processes

Flow Designer is a linear, top-down graphical model. Step 2 follows (falls under) Step 1, Step 3 follows (falls under) Step 2, and so on. While I think this is fine for very basic logic flows, rarely do business problems get solved with very simple rules with a handful of steps. Rather, when humans gather to map out business flows and determine what and how many paths are needed to arrive at the solution, or at least the conclusion, they draw them on the whiteboard like this:

Man looking at flowchart

At least they have in my experience. In other words, they draw flowcharts, which can and often do incorporate lots of divergent paths. These are both hard to visualize and hard to build in Flows. In fact, a Flow is almost laid out in pseudo-code fashion.

pseudocode

This is how an actual flow appears:

ServiceNow Flow

I’ve seen flows in the real world where an else comes 30-40 steps after its corresponding if, making it quite difficult to visualize as a lot of scrolling is required to see the complete block. And although I’ve heard this is coming, you can’t easily do branches or rollbacks. You simply have to keep adding and nesting steps.

I was recently working on an integrated request application where we were having lots of issues with the Flow because it was almost 140 steps, making it hard to visualize and troubleshoot the end-to-end. I suggested we try a Workflow alternative, and not only were we able to reduce it to less than 30 activities, but when the customer saw it using “Show Workflow”, they immediately said “Oh, this is much easier. I can understand this.”

Flow Designer takes longer to develop and deploy

In my experience, developing the same solution in Flow Designer takes longer than in Workflow Editor, and not by a small amount. When I’ve been working with project teams that are attempting to build solutions in ServiceNow using Flows, and I estimate time to complete using my 12+ years of ServiceNow experience, I’m always underestimating the level of effort, because I’m basing it on how long it would take using a Workflow. Here are a couple of more specific examples:

  • When I was trying to “do the right thing” and learn Flow Designer, I went through the micro-certification course. One of the learning activities was to create an approval. After finishing 20 minutes later, I thought, “this is supposed to be easy?” Here’s ServiceNow’s documentation on it: Flow Approval. I realize it has flexibility, but if you want a simple approval, why all the hassle? Conversely, in Workflow, you add an Approval activity, add the approver(s) to it, and draw the lines out of Approve and Reject. This can be done in a few minutes or less.
  • In a recent project, there was an integration to Service Request and Hardware Asset Management that used a Flow to manage the lifecycle of the request. The Flow was over 130 steps. After nearly a year of trying to resolve all issues and go-live, and after HI Cases to figure out why the flow wasn’t working, we scrapped it in favor of a Workflow. The Workflow was less than 30 activities and was deployed in less than 4 weeks.

Part of the marketing pitch is that Flows can be built as small, standalone chunks of work that can be called from other Flows, allowing larger processes to be assembled quickly from these reusable pieces. I see problems with this idea:

  1. Having the knowledge and intelligence to logically break the Flows into the correct smaller Flows is quite difficult, particularly for folks who do not come from or have a design background.
  2. Managing lots of Flows and being able to piece them together into a workable top-level Flow with Subflows is also challenging. It’s like dumping a series of Lego™ sets on the floor without the instruction sheets and trying to piece together coherent models. It’s technically possible, but unlikely, and frustrating to boot.
  3. If you are able to build the Flow, testing and debugging is harder because you have to drill into subflows to see where the issue may occur. I’ve also found that it’s not super easy to see what data is passing back and forth between the subflows and the calling Flow.

Given these issues, I haven’t found that non-developers can easily design and build Flows the correct way, if at all. As I mentioned in one of my examples, I get pulled in to troubleshoot a Flow that’s well over 100 steps because the original developer couldn’t logically factor it into manageable subflows.

Flow Designer is a code-replacement tool that requires code to make work

If you think about what a Flow is, and how it is built, it is essentially a complicated code-replacement tool. Think about data pills: they are graphical dot-walks and workflow variables. All this is wonderful if you can actually have non-developers creating viable Flows, which is the advertised benefit. I have yet to see this actually happen, for the reasons I’ve mentioned thus far. Here’s another reason: in my experience, when Flow steps don’t work as expected, and there’s no obvious reason why, the solution is usually to open the script editor in the Flow step and write the equivalent script to what the Flow should be doing without script. This negates the ability for non-developers to create them, or at least to make them production-ready. My experience has been that teams using Flows can have less senior folks put together a framework, but they need senior developers to actually get the Flows into working order.
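To illustrate what that “fix it with script” moment usually looks like, here is a minimal sketch of an inline script step inside a custom Flow action. The input and output names are made up for illustration; the execute(inputs, outputs) wrapper is the one ServiceNow generates for Action script steps.

    // Hypothetical script step: look up a CI's assignment group because the
    // no-code steps couldn't express the lookup. Input/output names are illustrative.
    (function execute(inputs, outputs) {
        var ci = new GlideRecord('cmdb_ci');
        if (ci.get(inputs.ci_sys_id)) {
            // The same dot-walk a data pill would perform, written out by hand
            outputs.assignment_group = ci.getValue('assignment_group');
        }
    })(inputs, outputs);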

Conclusion

My straw polling amongst my peers and teammates is that no one finds Flows easier to use than Workflows, and many either shrug and say, “Well, this is what ServiceNow is telling me to do”, or, “This is what I learned in training”. My educated guess is that Flow Designer was introduced to ServiceNow by an executive with enough clout to push it through as a viable Workflow replacement. I’d also guess it was purchased rather than built. After attempting to work with it for the better part of two years, my conclusion is a question: “Why?” Why am I compelled to use this product? Why was Workflow scrapped instead of being upgraded?

I’m fairly certain that putting some development heft behind Workflow could have made it more powerful and more flexible, rather than abandoning it as is for a more complicated, less useful tool. Until someone who knows tells me there’s a technical reason Workflows had to be scrapped, or a technical reason I shouldn’t use them (rather than “marketing” reasons), my conclusion is this: for the foreseeable future, unless there’s a pre-built Flow that does exactly what I need, I will continue to build in Workflows until I can’t.

Develop based on what you know, not what you’re sold

The Three Ws of Development https://sncma.com/2022/01/03/the-three-ws-of-development/ https://sncma.com/2022/01/03/the-three-ws-of-development/#respond Mon, 03 Jan 2022 23:49:06 +0000 https://sncma.com/?p=253 Where, When and Why you should do your development

In journalism, there’s the concept of the Five W questions whose answers are fundamental to getting the information needed:

  • Who
  • What
  • When
  • Where
  • Why

I want to talk about what I call the “Three Ws of Development” in the ServiceNow realm. These three are: When, Where and Why. We’re going to skip the questions “Who” and “What”. Why? Because “who” is a question for hiring managers, recruiting, and resource vetting. And “what” is (too often) the focus of most if not all training and documentation. Do you need to get the current user’s ID on the client side? Check the API – that’s the “what”. Instead, I want to focus on some areas of development consideration that I feel are often neglected, and I’ll explain each and try to put them in context.
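To stay with that example, the “what” is a one-line API lookup in either environment; a quick sketch, nothing more:

    // Client side, e.g. in a Client Script: the logged-in user's sys_id
    var clientUserId = g_user.userID;

    // Server side, e.g. in a Business Rule or Script Include
    var serverUserId = gs.getUserID();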

Most everyone in the ServiceNow world knows the basic system architecture, which is the same for almost all cloud-based applications:
High Level Architecture
On the ServiceNow side, there’s an Application Server that stores the compiled code and the files (images, etc.) needed by the application, and delivers content to the user’s browser when requested. The App Server connects to a SQL Database that stores all the data records and, in ServiceNow’s case, all the configurations and code customizations created by both ServiceNow and customers.

Now consider a “normal” transaction between these entities. We’ll use one that’s fundamental to ServiceNow: Accessing and updating an Incident record. The following shows all the component parts in this transaction:
Record Update Transaction Life Cycle 1

  1. Client makes a call to the server to get a specific Incident record
  2. Query Business Rules determine if certain records should be restricted from view
  3. Application queries the database for that record from the incident and related tables
  4. Database returns required data to the application server
  5. ACLs are evaluated and applied
  6. Display Business Rules run logic and place data on the scratchpad (to use client-side)
  7. Application Server provides browser with form, data, and all configurations based on 4-6

Record Update Transaction Life Cycle 2

  1. onLoad UI Policies and Client scripts run
  2. User manipulates data, onChange UI Policies and Client scripts run
  3. UI Actions run client-side, server-side, and sometimes both (first client, then server; a sketch follows below)
  4. On Save, client sends form data to the Application Server
  5. Before Business Rules manipulate record data prior to saving to the Database
  6. Application Server sends data to the Database
  7. After Business Rules run, typically updating other records or tables (current and previous are still available)

This is a broader “order of execution” list than ServiceNow provides in their documentation, which deals strictly with Business Rules.
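To make item 3 above concrete, here is a rough sketch of the classic “both sides” UI Action pattern. The action name, field names, and state value are illustrative only:

    // UI Action settings (illustrative): Client = true, Onclick = resolveIncident()
    function resolveIncident() {
        // Client side first: validate, then re-submit so the action runs on the server
        if (!g_form.getValue('close_notes')) {
            g_form.addErrorMessage('Close notes are required.');
            return false;
        }
        gsftSubmit(null, g_form.getFormElement(), 'resolve_incident'); // the UI Action's Action name
    }

    // Server side: runs when the form is re-submitted by the client function above
    if (typeof window == 'undefined') {
        current.state = 6; // Resolved
        current.update();
    }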

So how does this apply to our Ws discussion? Let’s discuss:

Where

In considering where to develop your configurations and customizations, you will almost always get better performance having them run on the Application Server rather than in the client’s browser session. Observe the middle section of the diagrams above and the components that live in it. For example, when a user accesses a record, if you need to run logic and have the result available in the client’s session, it is faster performance-wise to run the logic in a display Business Rule and place the result in the scratchpad than to run the logic in a Client Script, particularly if the latter would require querying the database after the record has been accessed and loaded in the client browser session.
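A minimal sketch of that pattern, with illustrative names (the VIP flag on the caller is just an example):

    // Display Business Rule on Incident (When = display): do the lookup server-side, once
    (function executeRule(current, previous /*null when async*/) {
        var caller = new GlideRecord('sys_user');
        if (caller.get(current.getValue('caller_id'))) {
            g_scratchpad.callerIsVip = (caller.vip == true);
        }
    })(current, previous);

    // onLoad Client Script on Incident: read the pre-computed value, no extra round trip
    function onLoad() {
        if (g_scratchpad.callerIsVip) {
            g_form.showFieldMsg('caller_id', 'Caller is a VIP', 'info');
        }
    }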

Another important use case is security. All security should be done in ACLs, and only supplemented with UI Policies and scripts where needed. ACLs are the only true security; all other methods pose as security and can usually be bypassed. For example, let’s say you have a Social Security Number field on a User record that should only be visible to Human Resources, and should not be editable by anyone (it feeds from an HR/HCM system). This field should be secured with a field-level read ACL for users with an HR role, another field-level read ACL that evaluates to false for all other users, and a field-level write ACL that evaluates to false for all users. If you were to use a dictionary-level read-only marker, this could be bypassed by scripts running as the system. If you were to use a UI Policy or a Client Script to make it visible and/or read-only, this could be bypassed by list views and edits, server-side scripts, and Integration APIs.

In keeping with the last idea, it is also good practice to mirror your client-side logic on the server side. For example, if you don’t want to force a field to be mandatory for every save, but want to run client-side logic that prevents the save of the record without the mandatory field in a particular scenario, you should also create a server-side Business Rule that aborts the save and messages the user about the mandatory field. This way, your logic is enforced in list edits and server-side record manipulation, and not just in form views.
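A minimal sketch of that server-side mirror, with an illustrative condition and field names:

    // Before Business Rule (insert/update) on Incident that mirrors the client-side check
    (function executeRule(current, previous /*null when async*/) {
        if (current.getValue('category') == 'hardware' && !current.getValue('cmdb_ci')) {
            gs.addErrorMessage('A Configuration Item is required for hardware incidents.');
            current.setAbortAction(true); // stops the save from list edits, scripts and integrations too
        }
    })(current, previous);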
Record Update Transaction Life Cycle 3

  • List edits bypass “standard” Client Scripts and UI Policies
  • External systems integrate directly with the Application Server

Note: You may have noticed that I haven’t mentioned List configurations like List Edit client scripts. These can also be used to “fix” the List edit issues mentioned, but they don’t address server-side logic or integrations.

When

Going hand-in-hand with the Where is the When of development. Specifically, consideration should be paid to when in the lifecycle of the full transaction the development is best to exist. Consider the following Business Rule scenarios:

  • You need to access information client-side that is not available on the current record being accessed but can be ascertained using the current record (e.g. querying information from another table using field data from the current record). The correct time (when) to run this logic is in a display Business Rule, placing the needed information in the scratchpad.
  • You need to manipulate field data for the current record before the record is saved to the database; for example, you need to multiply two integer fields from the current record and store the product in a total field on the same record. The correct time (when) to execute this is in a before Business Rule (a sketch follows this list).
  • You need to manipulate field data for a different record using the current record data, and either need to access the “previous” object, or you need the manipulation to be reflected in the user’s updated client session view. For example, you are updating a record related to the current record and the update needs to be reflected in a related list on the form the user is currently viewing. The correct time (when) to execute this is in an after Business Rule.
  • You need to manipulate field data for a different record using the current record data but it doesn’t need to be reflected in the user’s updated client session view, OR you need to trigger an integration whose updates don’t need to be reflected in the user’s updated client session view. The correct time (when) to execute this is in an async Business Rule. Async Business Rules create a Scheduled Job that gets queued and executed as system resources become available.
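Here is a minimal sketch of the before Business Rule scenario from the second bullet; the u_ field names are illustrative custom fields:

    // Before Business Rule (insert/update): derive the total before the record is written,
    // so no second save or extra database round trip is needed.
    (function executeRule(current, previous /*null when async*/) {
        var qty = parseInt(current.getValue('u_quantity'), 10) || 0;
        var unitCost = parseInt(current.getValue('u_unit_cost'), 10) || 0;
        current.setValue('u_total', qty * unitCost);
    })(current, previous);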

Why

There are many other scenarios an experienced system admin/developer can think of. A key is to understand all the component parts of the transaction diagrams above; the goal is to configure and develop in the correct place, both where and when. Why? The benefits of developing at the correct time and place are:

  • Performance: Following the when and where guidance will ensure the best system performance. Running logic on the server is almost always faster than on the client. Running logic asynchronously means the client, and therefore the user, doesn’t have to wait for the transaction to complete. Every database query the transaction avoids means faster execution and less wait time.
  • Security: Developing measures that secure fields and records, both visibility and editability (R and U in CRUD), in the correct place, means your instance is actually secure, versus appearing to be secure to an end user. After all, most if not all security breaches do not come from a human clicking and typing.
  • Maintainability: Broader maintainability comes from following good and best practices. Consider a ServiceNow health check: what is the likely result of a system audit if you follow the practices suggested above versus using whatever means are available to create solutions?

Conclusion

ServiceNow provides many means to an end. Sadly, much of the documentation and training does not cover which is the correct means to the end; rather, it simply details, “If you need to do X, go here and do this”. It doesn’t say why you should do it this way, or what the overall implications of doing it that way are. What I’ve tried to do is give you a baseline of how the system works, so you can understand where and when the correct places are to do X. Understand the baseline, and stop and think about it before everything you develop. I promise your platform, and hopefully your work life, will be better for it.

Happy developing!

Breaking the Code – Designing for Configurable Maintenance https://sncma.com/2021/07/07/breaking-the-code-designing-for-configurable-maintenance/ https://sncma.com/2021/07/07/breaking-the-code-designing-for-configurable-maintenance/#respond Wed, 07 Jul 2021 16:37:00 +0000 https://sncma.com/?p=222 ServiceNow is nothing if not flexible. It certainly gives you options for achieving your business and development goals; some of these are obvious and well documented, and some feel like they’re on the “secret menu” for the die-hards only. Regardless, ServiceNow was designed as a flexible platform.

Over the years I’ve seen this flexibility used in a myriad of ways, some really clever and some head-scratchers. Most fall somewhere in the middle. It’s also been the impetus for the rise of good and best practices, from tribal hearsay to attempted codification. But one thing I feel strongly about is using the pieces of the platform in the best way possible for maintenance by non-coders. This is my good practice.

One of the elements that exists all over the platform is the ability to script your way to a solution. Oftentimes it’s an “advanced” option on a particular element – think Business Rules and Transform Map scripts. For people with a coding background, there are also ways to use object-oriented concepts to build classes and extensions and call them from almost anywhere you need them. What we’ve seen in recent years is that ServiceNow itself has moved in this direction for much of its new product development. Anywhere advanced logic is needed – something that can’t be contained in a single core element – ServiceNow development builds advanced, decomposed scripts (usually in standalone Script Includes) and calls them from script blocks in platform elements: ACLs, Business Rules, UI Actions, etc. The issue is that this requires a coding background either to understand the logic flow or to make the changes required for your business.

(I’m going to go on a slight rant here. If you don’t care to read my angst and just want my good practice suggestion, skip to the next paragraph.) In my opinion, this is simply poor design and is indicative of turning ServiceNow over to coders rather than critical thinkers. The founders of ServiceNow gave us all the elements we needed to create code-light solutions, but they aren’t being used. I often say to customers, “you can code your way to anything, but this isn’t good design”, yet I see a lot of customers whose admins or outside contractors just write code until a problem is solved. Great. Now any troubleshooting or change requires wading through mountains of code. This is not a sustainable maintenance model and should concern anyone making budget decisions about paying for ServiceNow development and maintenance.

So what am I getting at with this article? What I’m suggesting is that if one is smart with designing solutions, these solutions can be maintained through configuration and not code. The solution looks something like this:

End to end flow

In this scenario, the UI Element is out of the box, and the code and config are created by the developer. What’s important to note is that the code is written once, as part of the development of the solution, and changes to the logic are done through the configuration. In other words, the code is simply a mechanism to connect the business logic to the front end, and the logic is contained in configuration.

Let me go through a more specific example of this. I’ve worked with quite a few customers who are using Customer Service Management (CSM) and ITSM and need them to work together. From a business logic standpoint, CSM Cases need to integrate to ITSM records (Incident, Change, Service Request) when non-CSM teams are needed to solve an issue or fulfill a request in order to complete a Case. The Case remains the purview of the customer and the CSM Agent teams, but backend teams and processes are needed to “finish the job”. I call this a Process Integration or intra-ServiceNow integration – it works much the same as a ServiceNow to other system integration (or SN instance A to SN instance B), but is managed wholly within a single instance and works to integrate disparate records and processes rather than systems.

ServiceNow has recognized this need and created a Service Management integration plugin (I may have the exact name wrong), but this solution is all code, 100%, down to the field level. If you need your process integration to work any differently than what ServiceNow has provided, you have to write your own code to make it work.

To me, this is clearly not ideal. So what I’ve done is something that looks like this:

CSM to ITSM Integration Model

The key is that we use mapping records to maintain what should go where, and when. The code is simply middleware between the maps and the application: it is kicked off when needed by the application, does lookups against the maps, and updates records accordingly. The map records tell the system the following:

  • What record and field to pull from
  • What record and field to push to
  • When the transaction should occur*
  • Any field value translations required

* If you’re really slick, you use condition builders on the map records to determine the when. For example, you may only update the Case State to resolved when the integrated Request completes. Rather than building lots of Business Rules or writing If/Elses into your code, you build it into the mapping record. Now customer admins and even “admin-lites” can maintain business logic in configuration records and not code!

The elements described above can be extended and used for any Task integrations required across the platform. The steps are:

  • Create Task Mappings as required
  • Create Business Rules on the source tables – I usually just add a condition that the source record has to have a related Task-type record – that call the Script Include and pass the source and destination information to it. (A sketch of what that Script Include might look like follows.)
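Here is a rough sketch of that middleware Script Include. The mapping table and its columns (u_task_field_map with u_source_table, u_target_table, u_source_field, u_target_field and an active flag) are hypothetical names, purely for illustration:

    var TaskProcessIntegration = Class.create();
    TaskProcessIntegration.prototype = {
        initialize: function() {},

        // Called from a Business Rule, e.g.: new TaskProcessIntegration().syncRecords(current, targetGr);
        // Copies mapped field values from the source record to the target record.
        syncRecords: function(source, target) {
            var map = new GlideRecord('u_task_field_map');
            map.addQuery('u_source_table', source.getTableName());
            map.addQuery('u_target_table', target.getTableName());
            map.addActiveQuery(); // assumes the mapping table has an active flag
            map.query();
            while (map.next()) {
                target.setValue(map.getValue('u_target_field'), source.getValue(map.getValue('u_source_field')));
            }
            target.update();
        },

        type: 'TaskProcessIntegration'
    };

The point is that adding or changing a field mapping is now a new row in the mapping table, not a code change.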

I don’t consider any of this out of left field. Why? ServiceNow has already provided a model for this type of solution: Transform Maps. While they were designed for importing data into the system, the use case is the same: we need to get data from one place to another, and we need to allow for transformation of that data, plus logic to say when the data should be inserted, updated, or ignored. So why not use the same model for a process integration?

I’ve described a specific use case, but if you’re thinking along with me, you realize this way of thinking can be applied to other development scenarios. For example, I’ve seen quite a few consultants and customers (including myself) develop Service Request fulfillment lookup tables for Approvals, Tasking, routing and other data points. The goal is not to have to maintain dozens or hundreds of workflows (flows), or to have to check out and edit a workflow every time a simple configuration change is needed. Ultimately, the lesson is this: when you’re designing a solution to a business problem in ServiceNow, always ask, “Can I build this in a way that a non-coder can maintain the solution? Does it have to be solved with code only?” I suggest the answer to the latter is rarely “yes”.

Happy designing!

The Misconceptions of Upgradeability https://sncma.com/2021/06/01/the-misnomers-of-upgradeability/ https://sncma.com/2021/06/01/the-misnomers-of-upgradeability/#respond Tue, 01 Jun 2021 03:12:00 +0000 https://sncma.com/?p=178 The Blob is a 1958 American science fiction horror film whose storyline concerns a growing, alien amoeboid entity that comes to Earth from outer space inside a meteorite. It devours and dissolves citizens in small Pennsylvania communities as it grows larger, redder, and more aggressive each time it does so, eventually becoming larger than a building.

In recent years in the ServiceNow ecosystem, one of the topics that has taken on a “blob-like” existence is upgradeability: the ability to perform an upgrade to the next ServiceNow release, how long it takes, how much it costs, how much remediation is required, and to minimize all these aspects. I’d posit this is directly related to the increasing costs of the platform, and the ensuing exposure to C-level organizational people, and in particular, those who control the corporate purse strings. In conjunction with this, catchphrases such as “How to make your instance upgrade-proof” and “Stay out of the box (to make upgrades easier)” have taken on an inflated importance in companies’ ServiceNow journey. 

For this article, I’d like to offer my opinions on the following topics, based on my 11 years of dealing with this:

  • What actually affects your upgrades? What does it mean to take ownership of a ServiceNow element and go away from “out of the box”? What does it mean for your upgrades?
  • How do you configure, design, develop and customize in ways that upgrade-proof your instance? 

How Upgrades Work

First, let’s talk about how an upgrade really works. ServiceNow releases a new version of the platform, which includes “black box” items and new items stored in the database. We don’t care about the first (and couldn’t even if we wanted to). The new database-stored items* are what cause us potential work. There are three scenarios when ServiceNow attempts to push these items with the upgrade:

  1. A net new item is inserted into your instance. No conflicts occur because it is brand new, not seen in previous versions of the platform.
  2. An existing item has a new version that updates your instance. You have never touched the item, so it upgrades with no issues.
  3. An existing item has a new version that attempts to update your instance, but the existing item has been “touched” by you, so the upgrade skips the update, and you are left to research the differences and determine what to do. This is where the real upgrade work occurs.

* ”Items”: The ServiceNow configuration elements we are familiar with: Business Rules, UI Actions, Policies, Scripts, etc., that live inside of the ServiceNow database

So how does ServiceNow determine what should be skipped? If you’ve worked in ServiceNow for a while, you’re probably familiar with the “sys_update_xml” (“Customer Updates”) table. This is where every configuration or customization you’ve done is captured – the record of every Update Set entry. Without going into detail, any database table in ServiceNow that contains the metadata items that make up the system configuration is marked, and the system adds an entry to the sys_update_xml when any record in these tables is created, updated or deleted. If you update or delete a metadata item (record) that ServiceNow provided “out of the box”, it’s in sys_update_xml and you now “own” it. If you’re following the logic, the ServiceNow upgrade checks sys_update_xml when it attempts to update the metadata items. If it finds an entry in this table, it skips the update. (This includes marking a record inactive! I don’t know how the urban legend began that unchecking “Active” is upgrade-safe, but it’s not.)

I’ve included a screenshot representation of this here:

How Upgrades Work

What this means is you’ve now got a skip log entry, and if you’re doing your due diligence, you check why and determine if and how to remediate. When something is skipped, it is advisable to review and understand what might have been lost in the skip. But it may not be required – “losing out” on new functionality doesn’t mean the system isn’t going to work! 
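One quick way to do that checking is a background-script sketch along these lines. The Business Rule name is made up; the type and target_name columns are the ones shown on the Customer Updates list:

    // Has this out-of-the-box item been modified on this instance?
    var upd = new GlideRecord('sys_update_xml');
    upd.addQuery('type', 'Business Rule');
    upd.addQuery('target_name', 'incident autoclose'); // illustrative record name
    upd.query();
    if (upd.hasNext()) {
        gs.info('Customer updates exist - the upgrade will skip this item.');
    } else {
        gs.info('No customer updates - the upgrade will apply the new version.');
    }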

The other type of upgrade diligence is regression testing your current functionality to make sure nothing broke. This is particularly important for configuration changes and customizations you have made. When something breaks, it could be a result of a skip. Usually this is a process break rather than an obvious UI issue or system error. For example, a new or changed Business Rule of yours conflicts with a new Business Rule ServiceNow provided, causing logic issues within an application lifecycle flow rather than an ominous pop-up error message.

Upgrade Proofing Your Work

The previous section is very tactical in nature – what happened, what happens, and what to do about it. Now I’d like to discuss some ways of thinking about your work on the platform that might ease your upgrade processes. We’ll call it “Strategic Design for Easy Upgrades”. 

When you have to develop something on the platform, first consider the business requirements being presented. Think about what ServiceNow has provided application-wise “Out of the Box”, and see how your company’s requirements align with those applications. Consider particularly:

  • Does the lifecycle of the use case align with the out of box application? 
  • Does the use case have special security requirements?
  • Does the use case have special data schema requirements for a single entity (record)?
  • Does the use case need special notifications?
  • Do end users need the ability to save something as a “draft” before submitting?
  • Do the reporting requirements align with the data schema?

When the business requirements don’t align with any out of the box application or function (and the business requirement isn’t flexible):

  • First consider an out of the box application with adjunct functionality
  • Then consider a custom application

When I say “adjunct functionality”, what I mean is new functionality that runs in parallel with the “out of the box” functionality, not as a replacement or a customization to an existing item. For example, a brand new email notification with a different set of conditions and different content, or a new Business Rule that makes an update to a field that does not drive the application’s lifecycle flow.

I thought about some of the different development actions I’ve taken over the years and put them in the table below, categorized as “Start worrying” and “Sleep easy”, aka, “Probably shouldn’t do” and “OK to do”. In some cases there isn’t an equivalent in the “Start worrying” category, which I suppose is a good thing:

Start worrying | Sleep easy
Adding a new state value to an existing application | Relabeling a state label that’s conceptually the same (Closed -> Complete)
Changing an out of box Business Rule or Script Include | Adding a new Script Include or Business Rule script that provides adjunct functionality
Using DOM manipulation, JQuery or other libraries whose versions change | Scripting with any ServiceNow API calls
Changing out of box ACLs for specific business needs | Adding ACLs for specific business needs
Changing the fundamental structure or structural intention of an out of box application | Creating a custom application where out of box doesn’t meet business needs
Changing the fundamental lifecycle of an out of box application | Creating a custom application where out of box doesn’t meet business needs
Adding many fields to a table to meet a reporting requirement | Using metrics, referential data, database views and other SN elements to meet reporting requirements
Using “old tricks” to insert or move out of the box fields around in the dictionary | Creating your own u_ and x_ fields
(none) | Rearranging a form or list view to meet business needs
(none) | Creating new notifications, flows, workflows, reports, database views
(none) | Using platform elements to enhance the usefulness to the business

So to wrap it up, what I’m saying is if you follow the strategies from part 2 of this article, your work described in part 1 should be a lot easier. And, you don’t have to stay “Out of the Box” to do it. You simply have to understand what it means and work smartly with and around it to make your upgrades painless.

ServiceNow was designed as a platform, with a host of reusable elements. Using these for your business needs shouldn’t get you into upgrade trouble.

However… Just because you can do it, doesn’t mean you should!

Happy developing!
