Enterprise Adoption of AI: A Conversation with 451 Analyst Eric Hanselman
At KubeCon North America in November 2024, SUSE announced the launch of SUSE AI, a secure, extensible deployment and runtime platform for GenAI.
At the same event, I sat down with Eric Hanselman, Chief Analyst at 451 Research, part of S&P Global Market Intelligence, to discuss the state of the enterprise AI market and emerging AI trends.
Some of the key takeaways from the conversation include:
- The AI market is experiencing a high level of enthusiasm and intent, with varying levels of maturity of AI efforts in organizations seeking to leverage it.
- Data quality and data security are major challenges in AI implementation.
- Infrastructure is another challenge, as organizations require a level of skill in infrastructure management that most have not yet mastered.
- Regulatory constraints are expected to come from all geographies, necessitating global data protections.
- Companies need to be ready for changes in the future and avoid getting locked into specific architectures that could limit their use of AI.
Read the full transcript of the conversation below to learn about the challenges enterprises face when implementing GenAI, and what to be prepared for as the technology advances.
Interview Transcript
Stacey Miller: Hi Eric, can you share an overview of the current state of the AI market?
Eric Hanselman: We’re in an environment where there’s a huge amount of enthusiasm, a huge amount of intent, at a level and a scale that we haven’t seen in other technology transitions. That’s good news in some ways and bad news in others. We have a lot of organizations that are lunging forward to grasp a lot of the benefits of AI. The challenge, of course, is that the maturity of some of those efforts isn’t where it needs to be to deliver on a lot of the promises.
SM: What are some of the most significant trends you’ve seen within companies that are adopting AI?
EH: I think the predominant trend is really trying to get a handle on the data pipeline, ensuring that they’ve got the data they need in the places they need it and are able to manage it in ways that they can effectively put it to work for AI…
The other big piece is ensuring that they have the infrastructure necessary to be able to deliver AI capabilities… Building infrastructure that allows them to put AI to work wherever they need it, not just in their core environments, which of course has been relatively simple, but also out at the edge, and doing that with a platform that gives them the capability to do it at scale, takes a step beyond a lot of the skill shifts that were necessary to really master cloud native. There’s the integration of data that’s necessary, and there are the security aspects on both sides of the equation of leveraging AI. It’s all of those pieces that have to come together, and those are, I think, the things that are the biggest focus in terms of AI trends today.
SM: Let’s talk a little bit more about those considerations and data security. How much focus are companies putting on the data and the sovereignty over the data: knowing where the data is and being in control of that data?
EH: Concerns about data are of course critical. Data is the fuel that drives AI. The challenge that we see, especially if we look at measures such as what’s causing AI projects to fail, is that data quality is one of the leading problems. It means that organizations are still wrestling with ensuring that they have the right data in the right places. When we think about data security, data of course is critical to the organization. There are significant security concerns about data exposure. And in many cases organizations haven’t become particularly sophisticated in terms of their data protection capabilities. That means that they’ll tend to rely on physical location to address data sovereignty concerns.
They may also have regulatory concerns. They may have restrictions about where they can place their data from a regulatory requirement perspective. As we look toward the next stages of AI, being able to put together cryptographic protections for data and to have infrastructure that’s better able to secure that data are clearly critical. That’s an area I think many organizations are still working on, and it remains one of the predominant concerns about leveraging AI.
“The data that’s going to be driving AI is, of course, that most valuable data that an organization has, and the last thing you want is for that to be exposed.” – Eric Hanselman, Chief Analyst at 451 Research, part of S&P Global Market Intelligence
SM: With security being a concern, do you see organizations moving away from SaaS models? Do you see more enterprises choosing to build and run their own AI tools or using SaaS AI tools?
EH: Well, we see a lot of the above, and it depends upon an organization’s level of AI maturity. I think initially many organizations want to be able to build and manage their tools themselves. The challenge, of course, is that they may not have the capabilities to deliver that effectively, which then pushes them to SaaS-delivered models. But then they face some of the constraints of SaaS, which of course are concerns about security, data placement and sovereignty, so as they move up that maturity curve, they will move back towards managing more of those AI capabilities and AI infrastructure themselves.
SM: Data security is clearly a huge challenge. What other challenges do organizations face when running AI workloads?
EH: Infrastructure is the biggest of those. Understanding where organizations can actually run AI workloads. So much of that is a matter of establishing patterns that are different from what we typically implement for regular infrastructure; AI is fundamentally different. Organizations have to be able to better leverage the infrastructure capabilities they have and couple that with the infrastructure they put to work from partners. But that requires a level of skill in infrastructure management that most organizations haven’t gotten particularly good at. That’s probably one of the leading challenges that we see.
SM: That’s what we hear a lot from our customers: not being able to put that complex infrastructure together to make sure that it all works seamlessly and is secure for the business.
EH: It’s a huge part of the problem. [In our data, we see] infrastructure performance right behind budget and data; organizations think they can deliver on AI with the infrastructure they’ve typically worked with, without an understanding of what the real requirements and needs are.
SM: So moving on from infrastructure. What do you hear most from CISOs in regards to security?
EH: Well, the biggest challenge from the security perspective is ensuring against data loss. That’s a fundamental concern, because the data that’s going to be driving AI is, of course, the most valuable data that an organization has, and the last thing you want is for that to be exposed.
So simply managing a lot of the data loss protection pieces from a CISO perspective is key, but there are also regulatory constraints about how that data is used and what’s actually in that data, from data privacy concerns to just fundamental personal and privacy information concerns that CISOs have to manage as well. In all of those areas, they need to ensure that the data they’re working with can be protected wherever it’s going to be used, and that they can manage, again, that data pipeline piece that’s so critical to AI success.
SM: We’ve heard about the EU AI Act, and how GDPR is being expanded to include AI components and the fines associated with that, which would be pretty scary to me if I were a company venturing off into this AI world. Are there any other regulations that you know of that companies need to be worried about?
EH: I think it’s safe to assume that there are going to be regulatory constraints coming from all reasonable geographies around this. So rather than tying to any specific region, it’s important that organizations really think about data protections that are going to serve them globally. And fundamentally … it should be a question of ensuring that you’re protecting data and that you can then meet regulatory and compliance mandates based on the protections you have in place, as opposed to trying to meet individual regulatory requirements.
SM: And I know that, for example with the Executive Order that was just issued, regulations are focused on the trust, privacy, security and safety of AI, and on making sure that you’re not putting somebody’s personal information out into the wild. I think it’s safe to say that regulations will be passed around the world to ensure that companies keep personally identifiable information private.
EH: Absolutely. The challenge is ensuring that you’ve got an understanding of the data that you have. One of the things that organizations have been historically really bad at is data classification. And so much of this is understanding what the data assets are that you actually have.
So often IT has simply been the repository for data without an understanding about what the nature of the data is that they’re actually handling. So the classification part is one of those things that we’ve really got to get our hands around, because without knowing what the data is you have, it’s all that much more difficult to understand what kind of protections you have to put in place, how you can mitigate risks about its use, and on the other side of that, how you can use it effectively and what you can do with it in order to actually leverage it. Again, if there are anonymization paths, if there are other pieces that could help you get beyond some of those privacy concerns, you need to have those pieces built into that data management pipeline to manage it effectively.
SM: Absolutely. How important do you think it is to be able to customize AI solutions for different enterprises?
EH: Well… it’s important to be able to build AI infrastructure that’s going to suit the use cases for which it’s being applied. The challenge, though, is that within an individual enterprise, it can be easy to assume that what you need is going to be special and unique. When we think about AI approaches, certainly there are going to be industry-specific approaches, and certainly there are going to be classes of use cases that need particular types of infrastructure to support them. But you also want to come to the table with the expectation that you have to build a platform that’s going to be able to support a number of different use cases over time.
The last thing you want is to get locked into a very specifically crafted set of capabilities that are going to prevent you from using more sophisticated or just simply different approaches in your environment. So there has to be a reasonable balance. Yes, you want to be able to optimize, but you also don’t want to get locked into architectures that will potentially limit your future use.
SM: And that’s one of the things that I really like about [SUSE AI]: we’re providing that choice of architecture, we’re providing that choice of LLM, we’re providing the flexibility for you to choose whatever AI tools you want to use. So the value proposition to a company is that they can ultimately future-proof their AI platform, because we don’t know what’s going to happen next year. We don’t know what tools are going to be available next year, or in six months, or next month for that matter.
EH: And organizations are still just getting started on all this. In a study this spring, we asked about key values in infrastructure, and number two behind security was openness. I think enterprises have gotten to a point at which they expected the move to cloud to be something that would give them flexibility and portability. And yet they found in many cases that there were ties and locks to particular service capabilities within individual cloud providers that limited their flexibility. So they’re now coming to AI with a desire to ensure that they really do have that level of openness, because the one thing that they can count on is that what they’re doing today is going to change in the future, and they need the flexibility to be able to get there.
“The last thing you want is to get locked into a very specifically crafted set of capabilities that are going to prevent you from using more sophisticated or just simply different approaches in your environment.” – Eric Hanselman, Chief Analyst at 451 Research, part of S&P Global Market Intelligence
SM: Final question Eric. What do you think the future holds for AI? What are you seeing? If you could look into your magic crystal ball: When we sit here next year, what do you think we’re looking at?
EH: So it’s a combination of a couple of dynamics that we already see starting to play out. The first of those is organizations looking to really manage cost, and some of that is optimization, some of that is use case selection. So the first piece of this is organizations starting to rationalize what’s working for them and what’s not. That’s going to lead to a certain amount of pullback in experimentation. We already see some of that today, and some of that’s going to lead to the use of integrated AI capabilities delivered as part of the products and services they already use.
The next piece of that is getting better at data management. I’ve been harping on this as a fundamental skill, but it’s really getting to an understanding about what really managing data looks like and understanding what an organization’s data assets are. Most organizations haven’t really gotten to relatively high levels of digitization, and it’s being able to get to the point at which they understand how they can grasp more of the information that’s already in their organizations… We’ve seen it in the infrastructure side in a lot of the observability paths that are out there, but organizations will be moving more towards that as a business practice.
Those are the kinds of things that make up this next stage: being able to understand where the information is that they can actually put to work but aren’t putting to work today. There are a lot of those next stages.
“[Companies are] now coming to AI with a desire to ensure that they really do have that level of openness, because the one thing that they can count on is that what they’re doing today is going to change in the future and need the flexibility to be able to get there.” – Eric Hanselman, Chief Analyst at 451 Research, part of S&P Global Market Intelligence
Learn More
If you’re ready to embrace the potential of AI while overcoming the challenges discussed here, learn more about SUSE AI: a secure, private AI platform to deploy and run GenAI solutions for any AI application.
Watch our on-demand webinar to dive deeper into how you can securely leverage the power of AI with SUSE.