The Tectonic Shift in IT

April 3rd, 2018
Keith Thuerk
Storage Engineer


There is a tectonic shift happening in Information Technology (IT) right now: the industry is pushing generalist roles over specialist roles.

This shift can be seen, directly or indirectly, across IT organizations. A common example: a talented colleague leaves her role to follow another passion; the now-vacant role will most likely not be backfilled, and the team will be asked to share the duties, known as doing more with less. How does that stretching of the team impact your organization? You can see this talent void in all facets of IT (networking, storage, security & server admin roles). What is causing it? One might say the toolsets have improved dramatically: simplified duties and dashboards have made administration less complex (accomplishing the same task with fewer steps), and in some cases the tools now perform the task for the admin outright, by incorporating Artificial Intelligence (AI) and Machine Learning (ML). The tools also streamline duties through workflows and API integrations. However, at best they provide only 70-80% of the functionality of the legacy standalone tools they replace. This is part of the ‘good enough’ IT philosophy of the past few years. Yes, we hear the clamoring over Digital Transformation (DX) and the tight coupling of IT & Business Units, but what are you leaving on the proverbial table?

Additionally, the industry is pushing the ‘rebuild, don’t troubleshoot’ adage for all things IT, not just Virtual Machine (VM) instances and containers. For those technologies the approach makes complete sense, but does it make sense for every aspect of IT? Why are we bypassing the entire break-fix cycle? What is missed when you respond with the cookie-cutter answer, “just rebuild,” instead of understanding the scope and breadth of the issue being reported? If you were a malicious actor, couldn’t you model this behavior and then build an attack around it? Think about it: what if the ‘golden image’ you are rebuilding from has a flaw? Aren’t you just bringing the system back to be exploited once again, creating more rework? A vicious cycle, no doubt.
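The golden-image concern can be made concrete with a minimal sketch (the image contents and workflow here are hypothetical, not any particular product's). A rebuild pipeline typically gates redeployment on the image matching a recorded checksum, but note the limitation: the check proves the image is unchanged, not that it is safe. If the flaw was already present when the image was blessed, every rebuild faithfully restores the vulnerability.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of an image blob."""
    return hashlib.sha256(data).hexdigest()

def verify_image(image: bytes, expected_sha256: str) -> bool:
    """Gate a rebuild on the image matching its recorded checksum.

    This detects tampering or corruption since capture time; it says
    nothing about flaws baked into the 'golden' copy itself.
    """
    return sha256_of(image) == expected_sha256

# Hypothetical golden image and the digest recorded when it was blessed.
golden = b"bootloader+os+app (flaw included, unnoticed)"
recorded = sha256_of(golden)

assert verify_image(golden, recorded)             # rebuild proceeds, flaw and all
assert not verify_image(golden + b"x", recorded)  # only later tampering is caught
```

In other words, integrity checking alone closes the rework loop only if someone with specialist knowledge actually root-causes the flaw and re-blesses a fixed image.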

What underlying problems are being glossed over, and thus allowed to fester, as you pile on more and more infrastructure requirements without addressing the root cause up front? We are talking about the ability to troubleshoot and repair problems on the fly. Keep in mind, we are NOT talking about fixing a printer paper jam, nor about rebooting an instance (physical or VM-based). We are talking about the ability to understand complex modern data center issues, such as why a VM instance died, or keeps dying at irregular intervals. This troubleshooting goes beyond “Does the VM have enough CPU, RAM, or provisioned adapters?” and down to the packet level: “Did it receive corrupt or malformed data?” (Perhaps the data was injected deliberately to produce a desired outcome.) How is a generalist going to discern this? Think about how this could impact the Software Defined Storage (SDS) backing your storage archive on 5-year-old x86 hardware.
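To illustrate the packet-level question, here is a minimal sketch of corruption detection, assuming a payload framed with a CRC32 trailer (a common framing convention used for illustration, not a specific product's wire format). Distinguishing this failure mode matters because no amount of CPU or RAM tuning will fix data that arrives mangled:

```python
import struct
import zlib

def frame(payload: bytes) -> bytes:
    """Append a big-endian CRC32 trailer so the receiver can detect corruption."""
    return payload + struct.pack(">I", zlib.crc32(payload) & 0xFFFFFFFF)

def check(framed: bytes) -> bool:
    """Recompute the CRC over the payload and compare it to the trailer.

    A mismatch means the data was corrupted (or deliberately mangled)
    in flight -- a root cause resource tuning cannot address.
    """
    payload, trailer = framed[:-4], framed[-4:]
    (expected,) = struct.unpack(">I", trailer)
    return (zlib.crc32(payload) & 0xFFFFFFFF) == expected

good = frame(b"vm heartbeat 42")
bad = bytearray(good)
bad[0] ^= 0xFF  # simulate in-flight corruption of the payload
assert check(good)
assert not check(bytes(bad))
```

A specialist reaching for a capture tool and a check like this reaches a diagnosis; a rebuild merely resets the clock until the corruption recurs.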

How is your generalist going to handle slow performance from your freshly deployed object store (ObS)? Well, if you designed it properly, they should know that an object store is never about performance; it is about easy access to cold, slushy, or frozen data. See a need for a specialist yet?

Does your storage generalist know how to hunt down performance problems on VVols (Virtual Volumes, available in vSphere 6 and higher)? Recall that those were handed off to the VM team to carve up and work their magic on; they are supposed to just work, right? Does your generalist know how to troubleshoot the vStorage API for Storage Awareness (VASA) Provider? When would they force a cutover to a net-new VASA Provider to determine whether that is the issue? What other day-to-day issues are being swept under the rug by this new ‘don’t fix, just rebuild’ mindset?

Are you going to set aside the Operational Expense (OpEx) money saved by the move to generalists, to tap when you need to hire a pro on short notice? More than likely, you will not. How quickly can you onboard a subject matter expert to address an issue? How much lost revenue will the delay cost your enterprise? Have you thought to this level of detail as you tear down skill silos?

Do not misconstrue my words: I am not defending the IT industry’s sloth-like response to the ‘IT management application’ problem, nor am I attacking your desire to simplify your modern data center. I am simply shining a bright light on the generalist-vs-specialist push in enterprises: the generalist is another tool in the tool bag, not the only approach. Consider this fodder to ensure all decisions are weighed and logical approaches are available to handle modern issues. Now, take action and reach out to your Mainline team. See how we can help you architect, build, and monitor a top-notch modern data center, and perhaps even help fill gaps in your staffing models, to ensure all aspects are reviewed and planned out.

Please contact your Mainline Account Executive directly, or click here to contact us with any questions.
