The Democratization of IT

Dave Bour
11 min read · Dec 7, 2021
Photo by Arnaud Jaegers on Unsplash

This article is part of a larger series entitled “A Short History of IT” intended to answer ‘What problems does IT exist to solve?’ through a review of the evolution of IT. If this information resonates with you, please consider subscribing to my newsletter or providing feedback. To discuss engagements, check out my website at theitplan.com

Before living in the cloud, applications were hosted on servers in the office of the company that wished to use them.

When the transition to the cloud began in the late 2000s and continued through the 2010s, an unintentional result was the democratization of application selection within companies. The change has rebalanced power within organizations, as staff members now have a significant degree of influence over software use within startup, hyper-growth, venture-capital-backed, and almost all early-stage companies. Furthermore, select groups, namely software engineers, have even greater influence than other teams, which creates a hierarchical arrangement and favors more technical tools like Atlassian’s JIRA over point-and-click interfaces such as Trello or Asana.

Is this a good thing? Or rather, does this benefit the company? The staff member? The customers? Let’s discuss.

In 2006, if you wanted to use email(!) at your company, your IT source — whether an employee in IT, a word-of-mouth consultant, or a third-party IT provider (read: another company that acts as your IT department) — would follow these steps:

  1. Procure a physical server
  2. Install the server in your office IT closet or a co-location facility (a shared IT room somewhere)
  3. Install an operating system, then the email application (traditionally Microsoft Exchange)
  4. Configure the application to use the company domain name, add user accounts, and train staff
  5. Deploy an email client on staff computers (traditionally Microsoft Outlook) and train staff to configure it with the server settings and their account information (a sketch of those settings follows this list)
  6. Send and receive electronic mail
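
To give a sense of what step 5’s “server settings” amounted to, here is a minimal client-side sketch in Python. The hostnames, account, and password are hypothetical placeholders, and an Exchange deployment of that era typically spoke Outlook’s native protocols rather than plain SMTP/IMAP; this only illustrates the kind of configuration every staff member had to get right.

```python
# Minimal sketch of the client-side "server settings" from step 5.
# All hostnames, ports, and credentials below are hypothetical
# placeholders; a real deployment would supply its own values.
import imaplib
import smtplib
from email.message import EmailMessage

SMTP_HOST = "mail.example.com"  # the company's own on-premise server
IMAP_HOST = "mail.example.com"
USER = "alice@example.com"      # the account IT provisioned
PASSWORD = "change-me"

# Sending: connect to the company server, authenticate, send.
msg = EmailMessage()
msg["From"] = USER
msg["To"] = "bob@example.com"
msg["Subject"] = "Test from the new mail server"
msg.set_content("If you can read this, step 6 works.")

with smtplib.SMTP(SMTP_HOST, 587) as smtp:
    smtp.starttls()             # encrypt the session
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)

# Receiving: the same settings, pointed at the IMAP service.
imap = imaplib.IMAP4_SSL(IMAP_HOST)
imap.login(USER, PASSWORD)
imap.select("INBOX")
status, data = imap.search(None, "ALL")
print(f"{len(data[0].split())} messages in INBOX")
imap.logout()
```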

In the pre-cloud era, this process largely exemplified the deployment of any application within an organization — instant messaging, video conferencing, file sharing. As you might imagine, it created several barriers to deploying new software.

First, each decision to deploy new software carried significant productivity and financial risk. All of the costs were up-front — not subscription-based — as the company would effectively build its own Gmail franchise. The organization required capital on hand to purchase the server, software, licensing, and warranties, and to pay someone to configure and deploy it, plus each employee’s time to install and learn the software.

In turn, this meant that a meaningful portion of the decision-making process was dedicated to cost justification via return on investment (ROI) and other total cost of ownership (TCO) indicators like capital expenditure classifications and tax write-offs. In today’s business lexicon, we would call this a ‘heavy lift’ — and owing to it, the number of applications within an organization naturally remained small.
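
For illustration only, here is the shape of that cost-justification arithmetic as a small Python sketch; every figure is invented, and real TCO models included many more line items.

```python
# Toy pre-cloud cost justification; all figures are invented for
# illustration, and real TCO models had many more line items.
server_hardware = 8_000      # up-front capital expenditure
software_licensing = 12_000  # OS, Exchange, and per-seat licenses
setup_labor = 5_000          # consultant time to configure and deploy
annual_maintenance = 3_000   # warranties, patching, admin time
years = 5
staff = 50

tco = (server_hardware + software_licensing + setup_labor
       + annual_maintenance * years)
per_user_month = tco / (staff * years * 12)
print(f"{years}-year TCO: ${tco:,} (~${per_user_month:.2f}/user/month)")

# The ROI question: does the benefit gained exceed that cost?
annual_benefit = 10_000      # hypothetical productivity gain
roi = (annual_benefit * years - tco) / tco
print(f"ROI over {years} years: {roi:.0%}")
```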

Second, consider the IT stack a puzzle, with each application its own piece. Assuming you wish to see the picture clearly, it’s very important that a puzzle of four or five pieces fits together seamlessly. On the other hand, if your puzzle has a thousand pieces, some may be missing entirely with little effect on the overall image. In short, the IT ecosystem was much less forgiving of incongruence.

Third, there was no guarantee that the system you chose could reliably connect with the products of other organizations, if at all. For example, if you chose Polycom as your video conference provider, you could only call other companies that had made the same decision; those with Cisco video conference systems were out of reach. And if you chose Microsoft Lync for instant messaging, you were siloed to messaging only other staff within your organization.

To summarize, the number of IT applications remained small throughout the 2000s as a result of the technical expertise required to implement new services, the risk and cost implications, and limited interoperability between systems.

Nevertheless, the on-premise/few-system approach offered several benefits.

The first was predictability. If I wanted to draw a diagram and share it org-wide, I knew to do it in Visio because everyone else was using the same application.

Along the same line of predictability, IT ecosystems were more homogeneous and there were fewer options from which to choose for any digital task. If I wanted to reach you, I knew to email you or call you. I wouldn’t have to send an email, follow up via Slack, then schedule a Zoom call to be sure you had received and processed the matter. Likewise, a single issue management system meant that a ticket raised to IT could be re-assigned to a more appropriate org, such as Engineering. Project management would be completed in the same interface and available to all business leads, which lent itself to consistent templatizing of work across the company.

Third, information security inherently benefited from a restrictive atmosphere — fewer external parties were granted access to internal systems and data. With so few channels permitted, there were simply fewer gaps organizations had to protect against exploitation. And since each application required deep technical expertise to configure, specialization led to thorough, intentional configurations and reduced the number of administrators within a platform. In other words, the gate of entry was so narrow that issues analogous to enabling public access to your S3 bucket were less likely to occur.
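
As a concrete counter-example from the current era, the sketch below uses boto3 (the AWS SDK for Python) to apply the kind of deliberate, narrow-gate configuration those specialist administrators enforced by default; the bucket name is a hypothetical placeholder.

```python
# Explicitly block every form of public access on an S3 bucket,
# closing off the misconfiguration described above. Uses boto3;
# the bucket name is a hypothetical placeholder.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-company-data",  # hypothetical bucket
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```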

Finally, a homogeneous ecosystem with few tools and a high barrier to administer kept the burden of expertise at the IT perimeter rather than pushing it to the edge, where staff members reside.

However, the on-premise/few-system approach also created several problems.

The lack of software options imposed constraints on staff members, who either had to learn new software (reducing “time to market” productivity for a percentage of new hires) or simply couldn’t do what they needed within the chosen application. In short, constrained functionality was more prevalent than it is today.

As mentioned earlier, the commitment to a single ecosystem hindered interoperability between companies. Recall the early stages of video conferencing (sold on the premise that execs wouldn’t have to fly anymore — think of the first-class ticket cost savings!), whereby if they had Cisco and you had Polycom, you couldn’t call each other (BlueJeans showed up to fill that gap but, like MySpace, was later overtaken by others in the space).

Finally, the responsibility to maintain uptime rested with people and organizations less capable than they are today. Redundancy was expensive; expertise still is, and in certain circumstances has become scarce. Less automation meant more human interaction with systems, and the heavy investment in all of this made the cost of switching unbearable.

— ASIDE —

Any IT staff member of this era will recall setting up all kinds of video conference meetings. You would work with the IT department, office manager, or administrative assistant of the other company to test connectivity beforehand. Of course, there would always be problems the day of the meeting, and in an era before modern speakerphones (where speaker and microphone are the same device), noise cancellation, and reliable omnidirectional microphones, each use of the system taxed the IT and administrative departments.

— END ASIDE —

[Enter] The Era of Software as a Service

Photo by Webstacks on Unsplash

Beginning around 2010, software democratization took hold, and so began the march towards the opposite extreme: fully cloud.

As it stands today, the deployment of SaaS can take very different forms between, and within, organizations and industries. In some, the traditional process outlined earlier in this article still applies, but with less concern for risk and cost justification. The steps — identify, justify, procure, configure, deploy, train — are largely the same. However, the process changed more rapidly and drastically at early-stage companies. Let’s take a look.

  1. Staff member visits webpage of software they prefer to use.
  2. Staff member signs up for service using email and password.
  3. Staff member begins using software and invites others.

In the process above, the introduction of new software to a company’s IT ecosystem shifted from top-down to bottom-up, from tightly controlled and planned to free-for-all and ad hoc. Most importantly, the authority shifted from a specific department (IT) to the general population, but responsibility remained with the department. As this change progresses, IT’s role within a company — and more specifically the problem it is tasked with solving — also shifts substantially to emphasize customer service and de-emphasize deep, specialized expertise.

This change has yet to be reflected in popular culture. If we look at representations of IT staff members in western culture — in TV shows such as ‘The IT Crowd’ and ‘The Office’, and to a lesser extent ‘Silicon Valley’ and ‘Mr. Robot’ — they are generally portrayed as back-office, shy, meek know-it-alls who, more than anything, despise having to help non-technical staff. When they do, it becomes an opportunity to highlight why that may be — Is the cable unplugged? Did you turn it off and on again? Technical professions adjacent to IT are represented similarly — software engineering, server and network architects, even mathematical geniuses (Good Will Hunting, anyone?).

While these characterizations are dramatized for effect, surely there are degrees of truth to them. Ask any seasoned recruiter about their experience targeting IT roles and they’ll tell you how increasingly difficult it is to find the “right” person. That is partially attributable to the applicant pool — but the bulk of the problem lies in how IT roles are scoped.

Generally, we’re looking for two primary characteristics — technical adeptness and a nonjudgemental, warm personality. So long as IT was confined to the back-office, we were fine with candidates lacking the latter. And absent an intentional re-classification of IT’s role, re-focusing it towards customer service should probably be avoided because it requires the latter.

At it’s core, the personality profile of someone who excels at systems-level thinking centers around problem-solving from an operational and logical perspective rather than emotional and circumstantial. They are specific instead of vague and driven by evidence/facts instead of faith/happenstance; which helps to correlate various factors influencing processes and outcomes. If these traits enable an individual to solve technical problems well, why do they often seem mutually exclusive from an affable personality?

Well, the answer is that they’re not — rather, under conditions that promote frustration, human reactions err towards negativity. Interactions that lend themselves to frustration for either party are numerous, but suffice it to say that the individual must balance the systems-level approach required to solve problems with a more empathetic, individualized approach that accounts for the customer’s emotions, expectations, and lack of understanding.

Or, of course, the functions of issue ingestion and issue investigation can be separated by hiring customer service representatives for the IT helpdesk. These individuals may be trained to solve the most frequently occurring problems and triage the rest through escalation. For the company, this has the added benefit of creating a pipeline into IT for those showing technical prowess — which also relieves recruitment of having to identify and interview external candidates.

I personally advocate attempting this approach for the next iteration of IT.

In contrast to on-premise, a SaaS-first environment offers several benefits.

The first, and most obvious, is an increase in productivity at the individual level — staff can use what they know, reducing lead time to producing output. Deployment time has been reduced to nearly zero: new apps go live immediately, versus the previous era’s 3–6 month timeline for on-premise, hosted applications.

The second benefit is a reduction in cost and risk. Again, we see the reduction in deployment requirements making a substantial difference, as the application demands fewer resources and less maintenance. Since it is no longer hosted by the company, risk is transferred to the service provider, whose contract acts as a form of insurance against loss and downtime. Finally, spreading the up-front cost over time in the form of a monthly or annual subscription allows capital to flow where it is most needed. Some studies show a total cost benefit when utilizing SaaS over on-premise, though others show no cost benefit at all.

A tangential effect of IT democratization benefits industry and the economy via an increase in accessibility for those with fewer resources, allowing small and emerging companies (and nonprofits) to take advantage of the same productivity suites as the giants in their industry. In short, access to tooling is no longer the competitive advantage that once separated those with from those without.

Unfortunately, a SaaS-first environment also creates several problems.

The hairiest is fragmentation and siloed application usage, which seems natural, even endemic, to no-code and software democratization in general. If you make it easy to fix problems, you’ll have an abundance of solutions overnight. This carries its own detractions, namely that the differentiation between important problems and less-important problems becomes blurred. Because it is so easy to fix either, we no longer ask if we should. Of greater concern, though, is that it acts as a multiplier on fragmentation.

We can liken application fragmentation to drag on a vehicle: a negative coefficient applied to the entire company’s productivity and generation of output.
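
To make the analogy concrete (as an illustration, not a measured model): in aerodynamics, drag is

$$F_d = \tfrac{1}{2}\,\rho\,v^2\,C_d\,A,$$

where $C_d$ is the drag coefficient. By loose analogy, one could write effective output as

$$O_{\text{effective}} = O_{\text{nominal}}\,(1 - c_f),$$

where $c_f$ is a hypothetical “fragmentation coefficient” that grows with each redundant, non-interoperating tool in the stack.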

Think about a time when you wanted to share a process diagram that you made in Miro, only to find the intended recipient is on a team that uses Lucid.

In the previous era, diagramming was part of the Microsoft Office productivity suite — the same tool everyone used for email, documents, and slides also had a diagram creator. When barriers to software development fell, someone came along and said, “This solution has these deficiencies; I will create a product that solves for just those deficiencies.” This creates multiple tools from which to choose as the pool of diagramming solutions fragments. That is not a problem in itself, but combined with democratized software selection, it is the equivalent of driving down a dead-end street and not realizing it until you’re at the very end.

Fragmentation offsets most of the cost benefits of SaaS. When multiple overlapping tools are inevitably discovered, the company will often reach a state of consolidation. It must dedicate resources to planning a migration, as the tools are in production use, and then to moving content from ServiceA to ServiceB. Oftentimes, ServiceA doesn’t want to lose customers and fails to prioritize the export or integration functions that would let you move content out of its platform. All the while, you’ve been paying for redundant tools — no matter how insignificant the problems they were bought to solve.
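
A sketch of that consolidation chore, assuming two entirely hypothetical REST APIs (the endpoints, fields, and tokens below are stand-ins; real services vary widely in what, if anything, their export endpoints return):

```python
# Export content from one SaaS tool and import it into another.
# Both APIs are hypothetical stand-ins for a real migration.
import requests

SRC = "https://api.service-a.example.com/v1"  # hypothetical
DST = "https://api.service-b.example.com/v1"  # hypothetical
src_auth = {"Authorization": "Bearer SRC_TOKEN"}
dst_auth = {"Authorization": "Bearer DST_TOKEN"}

docs = requests.get(f"{SRC}/documents", headers=src_auth).json()
for doc in docs:
    # Export is often the neglected half: expect missing fields,
    # rate limits, or proprietary formats that need conversion.
    content = requests.get(
        f"{SRC}/documents/{doc['id']}/export", headers=src_auth
    ).json()
    requests.post(
        f"{DST}/documents",
        headers=dst_auth,
        json={"title": doc["title"], "body": content["body"]},
    )
```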

The third issue is an increased risk of data loss resulting from decentralized access control to software (and hence, to data and information). When application deployment becomes a matrix (anyone can sign up for any app), the inert default application configuration often persists into production. Since it is in the application’s best interest to spread, that default is unlikely to be the most secure, restrictive, or protective of company data. Furthermore, as the staff member clicks through all the authorizations the software requires, it is granted perpetual access to domain information, company information, and even data repositories. The proliferation of unfettered access carries numerous concerns, but for the purposes of this discussion, it can be seen as negating the security benefits of the previous era and, in all likelihood, increasing the risk posed to an organization’s most valuable asset — information and data.
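
One concrete mitigation is simply auditing what has been granted. The sketch below uses the Google Workspace Admin SDK Directory API (via google-api-python-client) to list the third-party apps a user has authorized; the credential file and email addresses are hypothetical, and a real audit would loop over every user in the domain.

```python
# List the third-party OAuth grants for one user in a Google
# Workspace domain. Credential file and emails are hypothetical.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES
).with_subject("admin@example.com")  # impersonate a domain admin

directory = build("admin", "directory_v1", credentials=creds)

tokens = directory.tokens().list(userKey="alice@example.com").execute()
for t in tokens.get("items", []):
    # displayText is the app's name; scopes are what it can reach.
    print(t.get("displayText"), "->", ", ".join(t.get("scopes", [])))
```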

Here, risk reduction necessitates a top-down commitment from the organization to developing a mature information security program. If I were a technology weather-person, this formation signals that the next iteration of IT is likely to include a Return of the CISO. The CISO Strikes Back. A New CISO.

If you enjoyed this article, please consider subscribing. For engagements, please visit my website at theitplan.com

Dave Bour

Building IT infrastructure and teams where there was none before. Fitness, wellness, and adventure enthusiast. Engagements at theitplan.com