SNCMA – A ServiceNow Architecture Blog

Wherefore Architecture? (12 Jun 2024)
If ServiceNow is built to support Citizen developers, why do we need ServiceNow architects?

“Thinking about design is hard, but not thinking about it can be disastrous.” – Ralph Caplan

Introduction

For almost 14 years in the ServiceNow space, and across a rapid expansion of the ecosystem, it has been interesting to observe and analyze various organizations' approaches to developing and maintaining their ServiceNow environments. Specifically, how do organizations manage the inflow of business needs, the distribution and velocity of development, configuration and administrative work, and the ongoing maintenance of the platform? As the footprint of ServiceNow has expanded in conjunction with the expansion of the business functions the platform supports, I increasingly see divergence in these strategies. I'm writing this article to articulate these strategies and provide my view of how each succeeds and fails, along with my recommendations for the correct strategy given a company's view of ServiceNow.

Management Approaches

There really are two ends of the spectrum for how ServiceNow environments are managed. At one end is the idealized view and what ServiceNow itself espouses: use of Idea and Demand Management to receive and vet business requirements, an oversight board – which ServiceNow calls a "Center of Excellence" – that does the vetting and prioritization of these requirements, Agile and PPM to manage the development work to fulfill these requirements, and an operational organization that handles release management, break/fix work, upgrades, and the performance and security of the platform. If you've got your thinking cap on while reading this, you'll quickly sense that this is intended for, and works best in, the largest ServiceNow implementations, where the platform is a large part of an overall enterprise strategy for a company.

The other end of the spectrum stems largely from traditional IT functions; that is, a purely operational model and mindset where all development is treated as one-off break/fix/enhance type work. Where “keep the lights on” is the primary and sometimes only strategy. In these organizations, ServiceNow development work is typically handled through Incident and/or Enhancement processes, and each task is designed, developed and released “in a silo”, usually without thought to larger strategic initiatives. In other words, the view of the development does not extend beyond the scope of the need elucidated.

With a 25-year career in IT, I'm certainly aware of and sympathetic to this mindset. I find it particularly prevalent in MSP or MSP-like organizations. It's not that the people running these organizations intend to be "unstrategic" (not a word); it's what they know. These mindsets are built over years and decades of running IT as an operational entity.

There is a cost to doing business this way – and this is the crux of this article. When you implement under an operational mindset, you necessarily build everything as a one-off. Critically, no design or architecture considerations are taken into account, which creates concerns for platform maintenance, stability, health and optimization. These can range from the simplest quirks, like inconsistent forms and fields and re-created code logic, to large-scale issues with performance and user experience.

Examples

Here are some specific examples of development done without design or architecture prior to “hands-on” work:

  • A developer customizes an out of box ServiceNow application when a custom application would have served the requirement better. This leads to upgrade issues.
  • A developer builds a security requirement using client-side functionality, which is pseudo-security. This security hole is exposed when using integrations and server-side functionality to the same data.
  • A series of requirements for a single application are developed as one-offs. After these are implemented, the UI/UX experience is compromised, as now the form design is cluttered and out of sync with other applications. Adding UI logic and many related lists hinders the performance of form loads.
  • One developer uses Workflow, another Flow Designer, another a series of Business Rules, and another a series of GlideAjax client scripts, all to implement similar requirements. Maintenance becomes hyper-complex as each change or fix is a one-off "head scratcher": "Now where is this functionality??"

I can argue that Agile is a contributor to this problem. Not the methodology itself, but the incomplete usage of the methodology. I often see organizations going no further than Stories to manage their work. While Stories done correctly are ideal for “state the business requirement, not the technical how” of documentation, without using Epics to group requirements into a cohesive release, and more importantly, without architectural design overseeing the Epic, the Stories are developed in silos and lead to the issues noted above.

Best Practice

In my experience, the best practice is to have an architectural or design review of all requirements prior to development. Some may only need a cursory review to confirm it can be built as a one-off. Others may need a directed technical approach because there are several ways it could be built, and a consistent approach platform-wide is best for long term maintainability. And some may need a complete analysis of pros and cons, buy versus build, custom application versus out-of-box in order to build the right solution for the business need and the platform sustainability.

I’ve included a diagram below that shows the “idealized” process flow, including a Center of Excellence that fulfills this function:

[Diagram: the idealized process flow, with the Center of Excellence between requirements and execution]

Center of Excellence

The concept of a Center of Excellence, or at least some level of architectural oversight, is not meant to be onerous or a process bottleneck. This is a common concern organizations have, and a reason they don’t do it. My argument is that the operational sweet spot for such a function lies somewhere in the middle of the spectrum: Organizations are free to be as fast and independent with business requirements as they choose. The oversight part of the process is a simple architectural design review of all Stories (requirements) prior to development, with the end result a proper technical approach to development. A good architect will quickly recognize when there are multiple approaches to implement a requirement and provide guidance on the correct approach, taking into consideration all aspects mentioned previously. If the Agile methodology is being used, this can be part of the grooming process.

The diagram above is one I drew for where the Center of Excellence lives in the overall development process, between the requirements gathering, and the execution, either as operational one-offs or as larger project-type initiatives.

ServiceNow’s Documentation on Center of Excellence

In the end, it comes down to an organizational decision, even if not made consciously: "Do we spend the time up front to ensure things are developed in a cohesive, platform-strategy way, or do we dedicate more time, money and resources to fixing issues when they (inevitably) arise in Production?" The simple analogy is working to prevent forest fires versus dedicating resources to fighting forest fires.

Building Core Strength (20 Feb 2023)
Why good core data is both the roots and the flowers of your ServiceNow tree

“A tree with a rotten core cannot stand.” — Aleksandr Solzhenitsyn

In the fitness world, and in fact the physical human world, your core is the central part of your body. It includes the pelvis, lower back, hips and stomach. Exercises that build core strength lead to better balance and steadiness, also called stability. Stability is important whether you’re on the playing field or doing regular activities. In fact, most sports and other physical activities depend on stable core muscles.

As ServiceNow has moved further towards being a product company and less a platform company, it’s easy to lose sight of the aspects of the system that are core to its functionality and its value. If you’re solely focused on products*, it’s akin to building big arms and shoulders, and large calves and thighs, but ignoring your back, abs and glutes. Eventually you’ll be a Popeye-ish figure, unable to balance because you’re both disproportionate and “weak in the middle”. In this article, we’ll discuss what’s core to ServiceNow, what benefits having a good core provides, and how to build and maintain this core.

*Products, or Applications and Application Suites, are things like ITSM, CSM, HRSM, ITBM, ITOM, and their components. These are the things ServiceNow builds, markets and licenses on top of the core platform.

What is the ServiceNow Core?

There are several aspects to the core platform. I've highlighted some of these in a previous post, primarily the development components that make up all the applications. In this post, I'm going to focus on the core data, which at the very least drives consolidated reporting and, as I'll elucidate later, at best gives full insight into how your business is running.

From a data perspective, the core data are the tables that can be seen in the Organization application in the left navigation menu.

The main tables are:

  • Companies
  • Departments
  • Locations
  • Business Units
  • Cost Centers
  • Users
  • Groups

Note: Vendors and Manufacturers are Companies with particular attributes, not unique tables.

If you look at the schema map for any of these tables, you’ll see how many tables reference these records. For example, the Department table is referenced 746 times in my largely out-of-the-box PDI. Most of these are the CMDB, and indeed, it is hard to use the schema visualization ServiceNow provides because of the number of CMDB tables it needs to draw to represent the schema. If you look at the dictionary references to Department, there are still 36 entries.

However, this is just part of the usage of this data. Consider the dot-walking use cases for Department. (If you don't understand what dot-walking means, please refer to the ServiceNow documentation.) Since Department is a field on the User record, everywhere a User reference exists, Department can be used by dot-walking to it. Looking at the dictionary references to User, in my PDI there are 784 non-CMDB fields across the platform. So that is 784 places where you can inherently filter on or group by Department by dot-walking from the User reference field.
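
As a small illustration (a sketch using the out-of-box caller_id reference on Incident; the department name is just an example), the same dot-walking works in server-side queries, report conditions and list filters:

    // Count active Incidents whose caller belongs to a given Department,
    // filtering purely by dot-walking through the caller_id (User) reference.
    var gr = new GlideRecord('incident');
    gr.addActiveQuery();
    gr.addQuery('caller_id.department.name', 'Finance');
    gr.query();
    gs.info('Active Incidents with Finance callers: ' + gr.getRowCount());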

Because the in-platform schema is overwhelmed by the CMDB, I drew a diagram of just how these core tables tie together:

[Diagram: how the core tables tie together]

Note: Depending on your organization, you may not need all these tables populated. Smaller organizations may not distinguish between Departments, Business Units and Cost Centers.

Building and Maintaining Your Core

Experienced system administrators and ServiceNow developers are familiar with these tables and this data. What I've often found is that there's an effort during the initial implementation to populate the required tables, but then maintenance lapses and the data becomes stale or messy.

Here are some common examples:

  • The data is imported once and either an integration or an ongoing process for maintaining the data isn’t implemented
  • The company does a re-organization and user departments, cost centers, business units aren’t updated
  • Unique identifiers aren’t determined for the core records and subsequent imports create duplicates
  • Companies treat users like tickets – they just need to be able to login, have the correct roles, and “life is good”

Here’s a small example I ran into recently: A customer had done a series of User imports from other systems without clearly identifying and marking a unique field. An integration was built from Salesforce using the email address as an identifier of a Requester (User) in ServiceNow. An issue was reported after we went live because the Requester was incorrect. The root cause was there were three active user records with the same email address and the system picked the first one sorted by sys_id. This issue was not previously identified because the two bad records weren’t being used by actual users.
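
A quick way to surface this situation is a small background script like the following (a minimal sketch; adjust the field if you match on something other than email):

    // Find active User records that share an email address.
    var ga = new GlideAggregate('sys_user');
    ga.addQuery('active', true);
    ga.addAggregate('COUNT');
    ga.groupBy('email');
    ga.addHaving('COUNT', '>', '1');
    ga.query();
    while (ga.next()) {
        gs.info('Duplicate email: ' + ga.getValue('email') + ' (' + ga.getAggregate('COUNT') + ' active records)');
    }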

In these scenarios, core data quickly becomes utilitarian and not useful for broader service management insight or improvements.

My recommendations for implementing and maintaining good core data are as follows:

  1. Identify sources of truth and system(s) of record for core data. This is an organizational best practice that certainly applies to ServiceNow as well. It’s rare that ServiceNow is or should be the source of truth or system of record for core data, perhaps other than local User Accounts, Groups and Group Memberships. For example, Active Directory is often the system of record for users across the enterprise. As an organization, identify these systems and implement integrations to receive data from these systems of record.
  2. Identify and implement unique identifiers for data records across systems. Akin to my example above, and assuming you've done step 1, before importing data from a system of record you need to determine the unique identifier from the source system. Ensure that ServiceNow has this field in the destination table (and import set table), and set up your transform maps or other integration logic to coalesce on this field's data. This is critical to ensuring duplicate records are not created (see the sketch after this list).
  3. Set up your imports and transforms to ensure core references are populated. You’ll need to order your transformations so that references to core table records on other core tables are populated – see the table diagram above. For example, in order to set the Location on the User record, the Location table needs to be populated first. However, if you want to use the “Contact” field on the Location records, you’ll need the Users in place. The reality is you’ll need to do multiple transformations or scripting to handle this circular logic. (Challenge yourself and try multiple transformations!)
  4. Use the Production (“PROD”) instance as the system of record for core data across ServiceNow instances. Within your ServiceNow environment, PROD should be the system of record for this core data; sub-production environments will get their core data from PROD via clones. You can and should use sub-production for testing core data integrations, but the data itself should come from PROD. This includes Groups and Group Memberships wherever possible, save for a one-off when development requires a new Group that cannot exist in PROD prior to release. (Think about this as you do development – often a Group can be created in PROD without impact to process.) Using PROD as the system of record for this data means you have matching sys_ids of these records across your environments, and references to this data will not break in clones or code and configuration promotions. It is fine and expected to create additional core data records in sub-production instances – test users in your TEST instance for example – but use PROD as your source of truth.
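
To make the coalesce idea concrete, here is a minimal sketch of an "upsert" keyed on a unique identifier, as you might write it in a scripted integration that bypasses transform maps (in a standard Transform Map you would simply mark the field map for your key as Coalesce). The payload shape and the choice of employee_number as the key are assumptions:

    // Create-or-update a User record, coalescing on employee_number so
    // repeated imports never create duplicates.
    function upsertUser(payload) {
        var gr = new GlideRecord('sys_user');
        var exists = gr.get('employee_number', payload.employee_number);
        if (!exists) {
            gr.initialize();
            gr.setValue('employee_number', payload.employee_number);
        }
        gr.setValue('first_name', payload.first_name);
        gr.setValue('last_name', payload.last_name);
        gr.setValue('email', payload.email);
        return exists ? gr.update() : gr.insert();
    }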

Benefits of a Strong Core

For experienced system administrators and ServiceNow developers who are aware of and/or follow good practices, I haven't mentioned anything they don't already know. Sometimes it's a matter of time and execution rather than knowledge. But what is sometimes not known is why this is important beyond having good, clean data in your systems – what the larger benefits of having this data correct and available actually are.

It is hard to generalize all the benefits into clean, succinct bullet points. What you can do is move past the ideas of "number of tickets open/closed" and SLAs. Here are some examples of the use of good core data; hopefully they will trigger your imagination to think about how they might apply to your organization and its business needs:

  • Using the Department and Cost Center data tied to the User references on task-based application records, you can see which organizations are delivering services to which organizations, and use this data for charge-back accounting. For example, IT has completed 500 Incidents for Sales, or HR has fulfilled 300 service requests for Finance. With timekeeping and cost accounting, this data could be used to flow down into cross-department accounting (a reporting sketch follows this list).
  • Analyze trends of types of Incidents and Service Requests by Location (again by requesting User). This analysis could reveal Incident types that could be converted to Problems that are localized but could be avoided in the future.
  • Group by core data points to determine if certain organizations or locations could benefit from a new or modified service (Incident deflection?)
  • Align new hires and intra-company moves with Department so that standard packages can be pre-ordered (rather than asking each time). For example, the Sales Department employees always get certain access, software and hardware; this could be aligned with the Department so that when a new hire is requested who is part of Sales, the access, software and hardware can be automatically ordered.
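
As a rough sketch of the first bullet, using the out-of-box Incident table and counting by the caller's Department (a real charge-back report would add date ranges, request types and cost data; iterating departments keeps the example simple, though a grouped aggregate would be more efficient):

    // For each Department, count Incidents raised by its members,
    // using the dot-walked caller_id.department reference.
    var dept = new GlideRecord('cmn_department');
    dept.query();
    while (dept.next()) {
        var ga = new GlideAggregate('incident');
        ga.addQuery('caller_id.department', dept.getUniqueValue());
        ga.addAggregate('COUNT');
        ga.query();
        if (ga.next()) {
            gs.info(dept.getValue('name') + ': ' + ga.getAggregate('COUNT') + ' incidents');
        }
    }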

For further reading, I did detail an example of what level of service can be provided, reported and analyzed when your core data is complete and used in a previous blog: Tier 5 Service Management.

Hopefully these examples trigger your own ideas about using referential core data to improve insight into, and drive improvements for, your own organization.

Conclusion

At a cursory level, it is fairly obvious why having good data in ServiceNow is beneficial: clean is always better than messy. However, there’s more benefit than just cleanliness. Having accurate, up-to-date core data can help take your Service Management to “the next level” – both understanding what is occurring in your organization at a deeper level, and being able to make informed judgments about how to deliver Services that maximize benefit and minimize human effort. So start with your roots – good core data – and cultivate the ideas and features that will make your Service Management bloom.

Starting Your "EPA" (Environment Proficiency Activities) (2 Jan 2023)
Tips for Optimizing your ServiceNow Instance Management

Having been in the ServiceNow business for over 12 years, I've come to take certain things for granted – at least until I see someone doing something unexpected and it makes me say, "hmm, maybe this isn't common knowledge." One such area where I've noticed this in recent years is the management of an overall ServiceNow environment – that which encompasses all your ServiceNow instances, and the processes and techniques by which configuration, code and data are built, moved and shared between them. In my experience, if done properly, this should be a minor undertaking from a time and manpower perspective. However, I've witnessed customers dedicating teams of people and days of effort to manage this. To that end, I'll share my experience in how to make this exercise a minimal part of your ServiceNow time within the SDLC.

Ways to maximize your environment management efficiency:

  • Time-box your update sets. This means using one update set per application scope per development period. Why? When developers are working in the same application space, and sometimes even when they are not, they may work on the same element within the development timeframe. If they are working in their own update sets and those sets are moved out of order, you will have to resolve conflicts on the destination instance(s). Here's an example (we'll use an Agile environment):
    • Developer Bob has a Story to add a field to the Incident form in the main section (0) and creates a Story-based update set to capture his work.
    • Developer Jenny has a Story to remove a field from the Incident form in section 2 and creates a Story-based update set to capture her work.
    • Jenny finishes her work first and demos the change to the business owners. She receives feedback that they actually want the field in section 3 now.
    • In the meantime, Bob finishes his work, demos it, and receives sign-off.
    • Jenny makes the updates asked for, demos it, and receives sign-off.
    • Jenny moves her update set to TEST and commits it.
    • The next day, Bob moves his update set to TEST and receives a warning that a newer update exists for the Incident form. He now has to resolve the conflict and hopefully not overwrite the latest version of the form.

    This is obviously a very basic example, but gives you the idea. This could apply to almost any element in the system – Form and List views, Business Rules, Script Includes, Flows and Workflows, etc. In case you’re wondering – and I want to be very clear on this – Application Scoping does not help with this. In this example, both would be working in the same scope, and there are exactly zero reasons why you should ever do scopes by story or developer.

    The last thing I’ll say on this point is that the concept applies whether you are doing project-based work – e.g. deploying a new application or releasing a large set of functional changes – or managing ongoing maintenance and enhancements. Either situation should include time-boxed development cycles if the development organization entity is being run properly.

  • Make your production instance the source of truth for core data. Core data includes Locations, Departments, Vendors, etc.; but it also includes Users and Groups. If you make production your source of truth* (* – at least within your ServiceNow environment – I'm not suggesting ServiceNow should be a source of truth for any of this data), you can create and update these records in production and XML-export them back to your sub-production environments. In this way, they will always have the same sys_ids, and configuration using them will not fail. There may be scenarios where you absolutely cannot have this data in production prior to "going live" with what you are developing. (I'm thinking of a new Assignment Group that cannot show in the pick list until the new development is live.) In this situation, create it in your development environment, and document its move to test and production as part of your go-live planning.
  • Make cloning a standard process occurrence. This applies to ongoing environment management rather than a new customer going live on ServiceNow for the first time. As an organization, make it a standard to clone your sub-production environments from production on a regular, scheduled basis. And then actually do it. Here are the benefits:
    • It keeps the environments in sync – what’s been released in production is reflected in sub-production. This makes regression testing easier. It also makes upgrade testing easier – if your upgrade works in sub-production you can have confidence it will work in production.
    • It settles your development lifecycle into a cadence. If your developers get used to wrapping up their work within certain timeframes, and having code freezes in certain timeframes, it will make your processes better align with SDLC standards, and make for a less chaotic development environment overall.

    This is not without challenges. For instance, the clone schedule may coincide with urgent incident resolution or longer-term project-style implementations. We’ll talk about managing and mitigating these a little later.

  • Use Update Sources (remote instance records) to move update sets. ServiceNow has the setup to allow you to connect to other instances and pull all completed Update Sets from that instance that don't already exist in the destination instance. Contrast this with using the XML Export and Import functionality to do it manually. (I assumed this was common knowledge, but I see a lot of developers and admins doing it the "XML" way.)

[Screenshot: Update Source (remote instance) records]

Here’s a diagram of the process I used at Workday (as a customer). This is a 4-week view, and is one full cycle of two (2) Sprints and a clone back at the end of the 4-week period. To the earlier points, the update sets were limited to one per Sprint per Application Scope.

[Diagram: two-Sprint release and clone-back cycle]

At the end of each Sprint, we used the last 2 days to move code to TEST, do our regression testing, and move to PROD. After the second Sprint and the finalization of code moves, we would use the weekend to clone back to TEST and DEV (one per day). We would also use the code freeze days to complete our documentation and do other system maintenance activities.

A few other notes about moving code and configuration in 2 week Sprint cycles:

  1. Urgent fixes (Incidents) should be managed separately in their own update sets and moved “out of band” – outside of the normal Sprint cycle – in order to maintain a stable environment. (And of course as part of a Change Management process.)
  2. For longer-term development projects where the work cannot be completed in 2 weeks, whenever possible incorporate the idea of a modified "walking skeleton" in order to close and move update sets in the normal time frames. This involves moving the code and configuration to production but hiding it until it is ready for go-live – inactivating the visible components, or making them visible to admins only, so end users don't see them (a small sketch follows this list).
  3. For exceptions that cannot be dealt with in any of the previous ways, use the XML export of the update set to back up unfinished work before the clone, then re-import and set back to In progress. Or, move the update set to production and leave in a loaded or previewed state. I haven’t found one to be better than the other, pick one and make it the standard.
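
One lightweight way to do the hiding (a sketch only; the property name is hypothetical, and roles, UI Policies or simply leaving records inactive are equally valid mechanisms) is to gate the new artifacts behind a system property that gets flipped at go-live:

    // Condition on the new UI Action (or an early guard in the new Business Rule):
    // the artifact moves to PROD with the normal Sprint release but stays inert
    // until the property is set to true at go-live.
    gs.getProperty('acme.change.new_feature.enabled', 'false') == 'true'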

I’m just beginning to scratch the surface on some of this, but I hope this gives you a general sense of how you can minimize your environment management efforts. As I say, in my experience, most of the activities using these techniques became rote and took little time in the overall SDLC. If you’re spending more than an hour or two a week, and it takes more than one person, consider some changes.

“Efficiency is doing better what is already being done.”

The Three Ws of Development (3 Jan 2022)
Where, When and Why you should do your development

In journalism, there’s the concept of the Five W questions whose answers are fundamental to getting the information needed:

  • Who
  • What
  • When
  • Where
  • Why

I want to talk about what I call the “Three Ws of Development” in the ServiceNow realm. These three are: When, Where and Why. We’re going to skip the questions “Who” and “What”. Why? Because “who” is a question for hiring managers, recruiting, and resource vetting. And “what” is (too often) the focus of most if not all training and documentation. Do you need to get the current user’s ID on the client side? Check the API – that’s the “what”. Instead, I want to focus on some areas of development consideration that I feel are often neglected, and I’ll explain each and try to put them in context.

Most everyone in the ServiceNow world knows the basic system architecture, which is the same for almost all cloud-based applications:
[Diagram: high-level architecture – browser, application server, database]
On the ServiceNow side, there’s an Application Server that stores the compiled code, the files (images, etc.) needed by the application, and delivers content to the user’s browser when requested. The App Server connects to a SQL Database that stores all the data records, and in ServiceNow’s case, all the configurations and code customizations created by both ServiceNow and customers.

Now consider a “normal” transaction between these entities. We’ll use one that’s fundamental to ServiceNow: Accessing and updating an Incident record. The following shows all the component parts in this transaction:
[Diagram: Record Update Transaction Life Cycle (1) – loading a record]

  1. Client makes a call to the server to get a specific Incident record
  2. Query Business Rules determine if certain records should be restricted from view
  3. Application queries the database for that record from the incident and related tables
  4. Database returns required data to the application server
  5. ACLs are evaluated and applied
  6. Display Business Rules run logic and place data on the scratchpad (to use client-side)
  7. Application Server provides browser with form, data, and all configurations based on 4-6

[Diagram: Record Update Transaction Life Cycle (2) – interacting with and saving a record]

  1. onLoad UI Policies and Client scripts run
  2. User manipulates data, onChange UI Policies and Client scripts run
  3. UI Actions run client-side, server-side, and sometimes both (first client, then server)
  4. On Save, client sends form data to the Application Server
  5. Before Business Rules manipulate record data prior to saving to the Database
  6. Application Server sends data to the Database
  7. After Business Rules run on any table (current, previous still available)

This is a broader “order of execution” list than ServiceNow provides in their documentation, which deals strictly with Business rules.

So how does this apply to our Ws discussion? Let’s discuss:

Where

In considering where to develop your configurations and customizations, you will almost always get better performance having them run on the Application Server rather than in the client's browser session. Observe the middle section of the diagrams above and the components that live in it. For example, when a user accesses a record, if you need to run logic and have the result available in the client's session, it is faster performance-wise to run the logic in a display Business Rule and place the result in the scratchpad rather than run the logic in a Client Script, particularly if the latter would require querying the database after the record has been accessed and loaded in the client browser session.
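
A minimal sketch of that pattern (assuming an Incident form and a hypothetical need to flag VIP callers; the display Business Rule runs on the server before the form is delivered, so no extra round trip is needed):

    // Display Business Rule on Incident (server side, runs before the form is sent):
    (function executeRule(current, previous /*null when async*/) {
        // Dot-walk to the caller's VIP flag and park it on the scratchpad
        g_scratchpad.callerVip = current.caller_id.vip.toString() == 'true';
    })(current, previous);

    // onLoad Client Script on Incident (client side, no GlideAjax/getReference needed):
    function onLoad() {
        if (g_scratchpad.callerVip) {
            g_form.showFieldMsg('caller_id', 'Caller is a VIP', 'info');
        }
    }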

Another important use case is security. All security should be done in ACLs, and only supplemented with UI Policies and scripts where needed. ACLs are the only true security, all other methods are posing as security and can usually be bypassed. For example, let’s say you have a Social Security Number field on a User record that should only be available (visible) to Human Resources, and should not be editable by anyone (it feeds from an HR/HCM system). This field should be secured with field-level read ACL for users with an HR role, another field-level read ACL marked false for all other users, and field-level write ACL marked false for all users. If you were to use a data dictionary level read-only marker, this could be bypassed by scripts running as the system. If you were to use a UI Policy or a Client Script to make it visible and/or read-only, this could be bypassed by list views and edits, server-side scripts, and Integration APIs.

In keeping with the last idea, it is also good practice to mimic your client-side logic on the server-side. For example, if you don’t want to force a field to be mandatory for every save, but want to run client side logic that prevents the save of the record without the mandatory field in a particular scenario, you should also create a server-side Business Rule that aborts the save and messages the user about the mandatory field. This way, your logic is enforced in list edits and server-side record manipulation, and not just in form views.
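
A minimal sketch of that server-side mirror (assuming the common "resolution code required when resolving" rule; adjust the state value and field names to your own scenario):

    // Before (update) Business Rule on Incident: enforce the same rule the client-side
    // logic enforces, so list edits, scripts and integrations can't bypass it.
    (function executeRule(current, previous /*null when async*/) {
        if (current.getValue('state') == 6 && !current.getValue('close_code')) {
            gs.addErrorMessage('A resolution code is required to resolve this Incident.');
            current.setAbortAction(true); // abort the save
        }
    })(current, previous);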
[Diagram: Record Update Transaction Life Cycle (3) – list edits and integrations bypassing client-side logic]

  • List edits bypass “standard” Client Scripts and UI Policies
  • External systems integrate directly with the Application Server

Note: You may have noticed that I haven't mentioned List configurations like List Edit client scripts. These can also be used to "fix" the List edit issues mentioned, but they don't fix server-side logic and integrations.

When

Going hand-in-hand with the Where is the When of development. Specifically, consideration should be paid to when in the lifecycle of the full transaction the development is best to exist. Consider the following Business Rule scenarios:

  • You need to access information client-side that is not available on the current record being accessed but can be ascertained using the current record (e.g. querying information from another table using field data from the current record). The correct time (when) to run this logic is in a display Business Rule, placing the needed information in the scratchpad.
  • You need to manipulate field data for the current record before the record is saved to the database; for example, you need to multiply two integer fields from the current record and store the product in a total field on the same record. The correct time (when) to execute this is in a before Business Rule (see the sketch after this list).
  • You need to manipulate field data for a different record using the current record data, and either need to access the “previous” object, or you need the manipulation to be reflected in the user’s updated client session view. For example, you are updating a related record to the current record and the update needs to be reflected in a related list of the record form the user is currently viewing. The correct time (when) to execute this is in an after Business Rule.
  • You need to manipulate field data for a different record using the current record data but it doesn't need to be reflected in the user's updated client session view, OR you need to trigger an integration whose updates don't need to be reflected in the user's updated client session view. The correct time (when) to execute this is in an async Business Rule. Async Business Rules create a Scheduled Job that gets queued and executed as system resources become available.
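
A minimal sketch of the second scenario (the field names u_quantity, u_unit_price and u_total are hypothetical; because the rule runs before the write, no extra update or recursive save is triggered):

    // Before (insert/update) Business Rule: compute the total before the record hits the database.
    (function executeRule(current, previous /*null when async*/) {
        var quantity = parseInt(current.getValue('u_quantity'), 10) || 0;
        var unitPrice = parseInt(current.getValue('u_unit_price'), 10) || 0;
        current.setValue('u_total', quantity * unitPrice); // no current.update() needed in a before rule
    })(current, previous);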

Why

There are many other scenarios an experienced system admin/developer can think of. A key is to understand all the component parts of the transaction diagrams above; the goal is to configure and develop in the correct place, both where and when. Why? The benefits of developing in the correct time and place are:

  • Performance: Following the when and where guidance will ensure the best system performance. Running logic on the server is almost always faster than on the client. Running logic asynchronously means the client, and therefore the user, doesn't have to wait for the transaction to complete. Each database query the transaction can avoid means faster execution and less wait time.
  • Security: Developing measures that secure fields and records, both visibility and editability (R and U in CRUD), in the correct place, means your instance is actually secure, versus appearing to be secure to an end user. After all, most if not all security breaches do not come from a human clicking and typing.
  • Maintainability: Broader maintainability comes from following good and best practices. Consider a ServiceNow health check – what is the likely result of a system audit if you follow the practices suggested above versus using whatever means are available to create solutions.

Conclusion

ServiceNow provides many means to an end. Sadly, much of the documentation and training does not cover which is the correct means to that end; rather, it simply details, "If you need to do X, go here and do this." It doesn't say why you should do it this way, or what the overall implications of doing it that way are. What I've tried to do is give you a baseline of how the system works, so you can understand where and when the correct places are to do X. Understand the baseline, and stop and think about it before everything you develop. I promise your platform, and hopefully your work life, will be better for doing so.

Happy developing!

A Good Leader Fears the Citizenry (2 Dec 2021)

"a camel is a horse designed by a committee" – Proverb

Over the past few years, there’s been a movement in the business world to espouse the concept of “citizen development”. Gartner defines a citizen developer in this way:

A citizen developer is an employee who creates application capabilities for consumption by themselves or others, using tools that are not actively forbidden by IT or business units. A citizen developer is a persona, not a title or targeted role. They report to a business unit or function other than IT.

Gartner: Definition of Citizen Developer

ServiceNow has espoused this concept in its marketing materials. You’ll see any number of websites, references, articles and online sessions talking about the need for citizen development and how ServiceNow supports it. From their website:

ServiceNow offers a series of citizen development tools, from low code to no code.
  • App Engine Studio and Templates
  • Flow Designer
  • Process Automation Designer
  • Integration Hub
  • Virtual Agent
  • Predictive Intelligence
  • Performance Analytics

ServiceNow: What is Citizen Developer

For the purposes of this article, I’m going to assume that citizen development is marked by these important aspects:

  • The development does not involve writing code, particularly logic constructs
  • The development does involve GUI elements that allows drag and drop construction
  • The developer does not need to be concerned with version control or other development processes
  • The development does not involve environment promotions or other release management aspects

Based on the blurb above, I’m going to assume that citizen development in ServiceNow (the platform), as considered by ServiceNow (the company), can involve any or all of the following:

  • Flows using Flow Designer
  • Table creation
  • Data elements (fields)
  • Data dictionary attributes
  • Form builds
  • UI Policies
  • Catalog Items

I’m sure I’m leaving quite a bit out, but I wanted to hit the “big stuff”. To me, this list encompasses the things Fred Luddy envisioned when the platform was first devised and built in 2004 (Tables, Fields, Forms, Policies) and the current marketing push around Flows to control all business logic.

Ultimately, what I think ServiceNow is envisioning is the ability to take a whiteboard or Visio-type flow and turn it into actual configuration in the platform, particularly by folks who aren’t developers by trade. This is certainly the Flow aspect. Additionally, the “Fred stuff” included being able to develop new data entry forms and business logic around them to digitize what was previously done on paper, email or spreadsheets. (Really, when you boil down ServiceNow to its core value, this is it.) Also done by “non-developers”.

Here's the fundamental flaw in this thinking: this level of development democracy assumes the guardrails are big enough and strong enough to prevent the introduction of flaws into the system. These range from simple flaws – inconsistency in form design or label nomenclature – to large flaws – applications that don't fit platform architecture standards set by either ServiceNow or the platform owner(s). So what are the guardrails? ServiceNow will tell you Application Scoping, which silos application elements by namespace and security. However, if you are a platform owner – responsible for the overall health of the entire system – what good are silos if every silo contains a giant mess? It reminds me of the closet scene from the Marx Brothers' "A Night at the Opera".

[Image: A Night at the Opera]

I certainly have not seen cost-of-maintenance studies done on this, but I have a hard time imagining that the cost of "cleaning up" and/or maintaining applications developed by non-developers is less than the cost of development that uses an architecture, a methodology, and developers.

Consider this analogy: If you were going to try and develop a cogent political philosophy, would it be wiser and easier to read the missives of a few political experts across a range of theories, or turn to social media and attempt to cull the same from it? I often think of social media as citizen development. It’s the “everyone contributes and we hope something cohesive comes out of it” mentality. If this has worked, I’ve yet to see it happen.

I think of another analogy from my college days: In my Operational Management class, specifically the time focused on lean manufacturing, my professor told a story of visiting various auto manufacturing plants. When he visited the Mercedes-Benz factory, he told of a team of highly paid, highly skilled engineers who sat at the end of the assembly line and reviewed and fixed every defect in the car before it passed inspection. So when a consumer buys a Mercedes-Benz, they know for an expensive price, they are getting a quality car. (We’ll leave the cost of engineering and parts out of the equation for now.) Conversely, when he visited a Toyota factory, he saw that every line worker had an alarm they could ring and stop the assembly line whenever they noticed a potential defect caused by anything in the system – process, parts, etc. The lean manufacturing methodology accepts upfront costs of line stoppages in order to eliminate the possibility of downstream defects and the cost of correction. The moral of the story is each way results in a quality product, but the way to get there is completely different.

To come full circle, I see citizen development as choosing the Mercedes-Benz way – where accepting inherent defects means your developers' efforts will be spent on the back end. I say this because, in my experienced opinion, I've not seen where the result of citizen development is clean, designed to platform standards, defect-free, and simply able to be deployed with no concern. Conceptually, there likely are ways that putting up enough guardrails is possible, but I don't think it's possible in the ServiceNow platform today. Specifically, here are the things required for development, or related to citizen development in ServiceNow, that are outside of the abilities of a non-ServiceNow developer:

  • Understanding how an application should be built given a platform architecture strategy.
  • Where a new table should live in the database structure.
  • Deciding what form views should exist and whether they should be forced.
  • Any management of record security.
  • Any coding. If you’ve worked with ServiceNow, you know there are plenty of business logic aspects that cannot be implemented without some code.
  • Troubleshooting Flow Designer bugs. As of Quebec, Flow Designer is still buggy and simply doesn’t work as expected.
  • Working with variables on Catalog Items. Since Catalog Items are a big aspect of the ServiceNow citizen developer push, it implies that a citizen developer knows how to define the correct variable types, when to use variable sets, how to do advanced reference qualifiers, and all the other "developer-y" stuff related to variables (an example follows this list).
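
As one concrete example of that last point, here is a minimal sketch of an advanced reference qualifier on a catalog variable (the variable names are hypothetical) – simple-looking, but it assumes the author knows encoded queries, dot-walking and the current.variables API:

    // Reference qualifier (Advanced) on a "Manager" reference variable:
    // limit choices to active users in the department picked in another variable.
    javascript: 'active=true^department=' + current.variables.requesting_department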

As I said many years ago at a Knowledge conference when the concept of citizen developer was introduced, “Ask a citizen developer when they should or should not extend the Task table.” I’d love to be proven wrong on this, but I’m still waiting.

Happy developing!

 

To Scope or Not to Scope (2 Oct 2021)

ServiceNow introduced the concept of Scoped Applications in the Fuji release. From their documentation:

Application scoping protects applications by identifying and restricting access to application files and data. By default, all custom applications have a private scope that uniquely identifies them and their associated artifacts with a namespace identifier. The application scope prevents naming conflicts and allows the contextual development environment to determine what changes, if any, are permitted.

Source: Application scope

Each ServiceNow application has an application scope that determines which of its resources are available to other parts of the system. Application scoping ensures that one application does not impact another application. You can specify what parts of the application other applications can access by setting application access settings.

Source: Understanding Application Scope on the Now Platform

The concept is that application code, configuration and data are programmatically "siloed". This is the ServiceNow version of the old encapsulation concept from object-oriented programming (OOP). In ServiceNow, applications across scopes have methods of interfacing with each other, but the elements that make up an application (tables, UI elements, scripts, UI configuration) cannot be directly accessed by other applications unless granted specific permission to do so.

An important note: when ServiceNow moved to application scoping, all existing platform applications and elements moved into the "global" scope. This is still a scope, but with special properties – it is by its nature more accessible. In general, any other application can access global elements and data by default, rather than being denied access by default.

Since its introduction, ServiceNow has been touting application scoping as a panacea to many development ills. The stated “best practice” is to always build new applications inside of a scope, and if you’re familiar with installing new applications from plugins or the ServiceNow store, you’ve likely seen quite a few scopes added to your platform environments. For example, in my Personal Developer Instance (PDI), which has very few plugins installed and no Store apps, I have 115 primary application scopes. (The main thing I’ve installed outside of the baseline platform is Customer Service Management.) To me, ServiceNow has gotten “scope crazy”. For example, in my PDI, just within the Change Application there are these scopes:

  • Change Management – Approval Policy
  • Change Management – ATF
  • Change Management – CAB Workbench
  • Change Management – Change Schedule
  • Change Management – Color Picker
  • Change Management – REST API

I can see having Change Management within its own scope. But why are six (6) needed? Does a change approval policy need to be in its own scope? If you’ve tried to make some basic configuration changes within an application, like a reference qualifier or a dictionary override, you know what a pain it can be to have to constantly change scopes and manage all the resulting update sets.

So why has this change been made? Here are some of the stated benefits of Application scoping:

  • “Containerization”: All aspects of the application are contained in a namespace
  • Security: By default, parts of the platform outside of the application have limited to no access to the application
  • Versioning and governance: The entire application has a version, versus the individual elements
  • Deployment: Much like versioning, the entire application can be deployed and removed as an entire entity
  • Isolating development: Because of the above, large development organizations can segregate development by application

Sounds wonderful, doesn’t it? I’m in total agreement with the concept. However, there are several known and stated qualifiers to developing this manner (in ServiceNow):

  • If you are building an application that extends from an existing application in another scope – including global – any element you want to leverage has to be copied into your application scope. For example, if you need a Business Rule’s logic from global, you have to copy the whole Business Rule to your application scope.
  • There are different platform APIs for global versus scoped applications. Many of the API calls are the same, but not all of them are. You have to be sure you are looking at the correct API documentation for your current use case (see the logging example after this list).
  • Managing development can get more complicated – you’ve got to pick the correct application in addition to the correct update set. This gets more complicated if you are developing across scopes.
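
One small, commonly hit example of these API differences is logging (a sketch; the message text is arbitrary):

    // In the global scope this works fine:
    gs.log('Processing order');      // gs.log()/gs.print() are global-scope-only methods

    // In a scoped application, use the scoped logging methods instead
    // (these also work in global and are generally preferred):
    gs.info('Processing order');
    gs.warn('Order is missing a cost center');
    gs.error('Order failed validation');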

Additionally, I've noted some other concerns in my experience working with this functionality:

  • By isolating development, there is a human nature aspect to consider – that developers are less likely to consider platform architecture when developing. There may be a comfort level with the siloing which might lead development organizations to skip oversight of design and standards.
  • Having to develop so that applications can use cross-scope APIs and elements adds a non-trivial amount of overhead to development times. In my experience, I've noticed a 20-25% uptick in time to develop inside of non-global applications. This is not an across-the-board, universal uptick; rather, most things take the same amount of time, and once in a while something that would take 5 minutes in the global scope takes 2 hours in the application scope, as you scratch your head and desperately search for why something that works quickly and easily in global doesn't work at all in your application scope. So there is a large uptick in time on a small number of elements that averages out to a 25% total time increase over the course of a development project.
  • Pushing upgrade sets across environments using multiple application scopes causes more conflicts and errors than just using global.

My conjecture is that had ServiceNow been designed with scoping from day 1, these probably would not even be concerns – they would have been worked out and eliminated in the platform design process. Instead, scoping has been bolted on "after the fact", and it raises these and other concerns I'm probably forgetting.

I’ve thought a lot about the question of whether to scope or not to scope. I get asked about it all the time on projects that require custom applications. In my experience, these are the actual reasons to use an application scope:

  1. You are building something you want to package to put on the ServiceNow Store or Share site, outside of your ServiceNow environment(s).
  2. You save money on licensing.
  3. You aren’t versed enough in the platform to feel comfortable implementing isolation techniques in the global scope, such as ACLs, or you aren’t comfortable with your developers for the same reason.
  4. You work in a massive development house with many teams developing in ServiceNow without any centralized management of the design, architecture and QA/QC of the platform.
  5. You want to be able to say you followed ServiceNow’s advice.

What if these don’t align with your needs? My advice is that you do not need to create new application scopes in order to do custom development. Do what you’re comfortable with. I prefer the ease of developing in the global scope, and I much prefer architectural oversight of the development happening in the platform. When I consider a set of business needs, and determine custom development is required to meet them, I want to be able to determine if I should extend base functionality and use existing elements without having to deal with the issues previously mentioned with application scoping. I also want my developers thinking about how anything they’re developing affects the larger platform. I think that not scoping actually encourages and trains design and development discipline.

I'll caveat all this by saying that I learned ServiceNow prior to application scoping, and I am comfortable managing development and updates in a single scope. If you're not comfortable designing and developing with the whole platform in mind, by all means use the tools ServiceNow has provided. I'm not saying it's a bad thing; I'm simply saying that I don't think it's necessary and it can have negative side effects. As always, think before you act.

Happy developing!
