Paul Langston

Podcast

15 May, 2026

The Data Movement

Episode 02: Iman Anvari


In this episode of The Data Movement podcast, we talk with Iman Anvari, director of advanced technology at Seagate, to unpack edge AI, robotics and why context is the true bottleneck for autonomous agents.

Table of Contents

Why context will define agentic AI at the edge


In this episode of The Data Movement, host Paul Langston sits down with Iman Anvari, director of advanced technology at Seagate, to explore what’s really coming next for AI — and what it will take to make autonomous systems work outside the lab.

Drawing on insights from NVIDIA’s GTC and Seagate’s hands‑on innovation work, Iman shares why the industry has reached a moment of real change. AI models are getting smarter — but capability alone isn’t enough. The challenge, he points out, is context: understanding what’s happening, where it’s happening and how to respond.

That’s where the edge comes in.

The conversation explores how robotics brings together hardware, software, and data infrastructure in a way few workloads can — and why latency is the hard constraint that moves compute out of centralized clouds and closer to where data is generated. When decisions have to happen in real time, relying on cloud round trips just doesn’t scale.

Paul and Iman unpack why autonomous systems need more than raw compute. They rely on layered memory and storage working together to deliver real‑time awareness and continuous learning. 

In this episode, you’ll hear about:

  • Why robotics brings compute, storage and real‑world data together
  • How edge “micro data centers” make AI automation practical
  • Why latency is the key driver pushing intelligence to the edge
  • How context becomes the constraint for autonomous agents
  • The tiered memory architectures required to support agentic AI at scale

Iman delivers a clear, systems‑level perspective on what it takes to move agentic AI from concept to reality. For teams building at the edge, it’s a reminder that progress isn’t just about more capable models — it’s about putting intelligence closer to the action, with the right data infrastructure to support it.

Iman Anvari
Director of Advanced Technology | Seagate

Transcript

Paul: Welcome to The Data Movement. I’m Paul Langston, and on this episode, I’m talking to Iman Anvari, the director of advanced technology here at Seagate. We’re going to be diving into context, memory and storage for enterprise AI systems. Let’s get into it.

Iman, welcome to The Data Movement, sir. You're one of my go-to guys when I have a question about what’s going on in the industry around us.

So, I’m super excited to spend some time with you today. Why don’t you give me a really quick recap around your journey at Seagate and where you’ve come from and what you’re focused on today?

Iman: Yeah, I mean, Seagate has been amazing.

I’ve been very lucky with my career at Seagate. Couldn’t ask for more. I’ve been here for more than a decade now. So many different roles, and in each one of them, I learned something new. I started as a solutions architect or a systems engineer back then and you work on many different projects, just building exabyte scale storage for other companies.

It’s one of those challenges that you learn so much, not just on the technical side, but also from a business side ... and everything around it. And so, I spent quite a bit of time there. Then, I built and managed a team for a while. But you know, I’m a product guy, and I’m very passionate about building and innovating products, so that kind of naturally got me into product management for a while.

And along the same route, in the last couple of years, I’ve been running the advanced technology team, which for me, is my dream job. We’re focusing on what’s coming next ... in three, five, seven years ... and really trying to figure out how do we innovate and position storage and Seagate for what’s coming.

And it’s so much thinking, so much researching, so much working with smart folks like yourself. So, I’m just happy to be here and learning. 

Paul: That horizon is so interesting ... the five-to-seven-year horizon ... as you’re thinking about technology, how it’s evolving and the pace at which things are evolving, especially at this moment.

It feels like something seismic, something big is happening in the technology space at the moment. How do you think about that? 

Iman: Back when I had hair and went to grad school, I was focusing on robots and autonomous vehicles.

And I was dabbling with neural networks back then. I reflect on this every few days actually, which might be too much, but it’s very interesting because I look back at all of that technology being talked about. Neural nets have actually been around for many decades now, from a concept perspective.

But they were not production ready. They were used in some use cases, but they were not really something that you would think of putting in a robot. And now you get up every day — and I'm very passionate about robots with automation and autonomy— and I see that this is becoming a reality.

It’s something that I thought would be happening maybe 30 years from now. Now it’s, like, happening and progressing every day. I feel like we’re at this inflection point that our lives are not going to be the same ... before and after this period of time. Even for me, I’m trying to stay on top.

And I think a lot of folks do the same, but just being able to see where this technology goes and being able to shape it, that’s what gets me excited, and nervous sometimes even. 

Paul: The robotics thing is really interesting at the moment, especially coming out of GTC. You were there, right?

Iman: Yes. 

Paul: What were some of the emerging trends or things that you learned at that show about robotics?

Iman: I usually think about these things in architecture. So, what’s underneath and what’s the end goal? And, for me, I feel like the ultimate end goal is to have a robot that can actually fold laundry for us. 

Paul: That’s the dream. Pair socks as well, especially when you have kids.

Iman: There you go. I say that in jest, but at the same time, it’s truly a hard goal for a robot, right? To have a robot in your home that you trust, and it can do things autonomously, and it can be flexible enough that it can go fold laundry. And I’m giving this example, but there are many different use cases for robots, I would say.

Back to GTC, there were two tiers that really focused on robotics and robots. One is the fundamental hardware. You need the hardware that can do things. And I think over the past 20 years, if you look at companies like Boston Dynamics, they’ve spent a lot of research time on making it work ... making the body of the robot a humanoid work. And I think we’re closer than ever to having a robot that can be programmed to do very meaningful things. But the other part of this is the autonomy, and I think that’s where the whole agentic AI comes in.

And when I reflect back on GTC, you could see that the focus was agentic AI. How do you deploy this at different scales and in different places? And what do you need to get there? But you could already see so many robotic companies becoming more of a reality, showing demos that actually worked.

Even surgery robots. Surgery robots today ... if you look at the da Vinci robot, it’s designed for precision, because you don’t want surgery to go wrong. But at the same time, the idea is that maybe you have robot- or AI-assisted surgeries. And they were already showcasing some of this.

I mean, it’s just mind-blowing to see where we’re going and what’s happening. It just gets me so excited. For me, robotics is the center of all of this because it’s where hardware, software and infrastructure come together in that one package. 

Paul: The robotics thing’s really interesting because you have a bunch of different technologies converging in order to be able to execute that, and one of them is sensors, right? Video sensors.

It’s ingesting data from video sensors in order to understand the context of its environment. We talk about data on this show. Video is, you know, just a really big data stream ... that the computing architecture that underpins the robot has to ingest and process and make sense of and store. How do you think about that? 

Iman: Any system that has to run autonomously — which a robot is like the best example of — it needs all the inputs. We have video, we have radar, lidar, right? Different data points that all come together.

And a great example of this is an autonomous vehicle. An autonomous vehicle is technically a robot that’s ingesting terabytes of data per hour just to be able to make decisions on the fly. As somebody who’s been in the data industry for many years, video has always been at the forefront of data capacity and data streams. That’s always the challenge you try to solve, because it’s so big, and the quality keeps going up, and the number of cameras keeps going up.

So, when I think about robotics, I think that’s actually one of the biggest problems a robotics engineer now has to solve. You have all this video streaming in. What do you do with it? Where do you process it? Where do you make decisions? Where do you save it? In my mind at least ... and this is maybe where I’ve been trained to do this ... when I think about these architectures, it all comes down to that data infrastructure and how you approach it for that specific problem. 

Paul: Yeah, it’s like the mechanics of the robot ... the physical aspect of the robot is one engineering challenge, of course. The other engineering challenge to me seems to be the data and the computing and storage stack underneath it. It seems like a huge challenge when delivering at scale. So, I go back to our sock-pairing or laundry robot in your home ... the physical assistant at some point, which seems like an inevitability. How does the computing architecture catch up to that level of scale? What’s your perspective on what needs to evolve? 

Iman: I’ve been thinking about how robots would work in an environment, and there’s this new term being used called cobots ... collaborative robots. And it kind of makes sense, right? You have robots, and maybe some of them are experts in certain things they do, and as you mentioned, that could be logical or that could be physical.

Maybe you have a robot. Now, this might not be in your home necessarily, but on a factory floor that you have different needs. You have a robot that can deal with conveyor belt stuff and a robot that can actually go and grab something from a container, if you will. So, there are two different use cases.

For one, maybe you need the arm. For the other one, you need the suction cup. So that’s the mechanical side of it. But then how do you connect all of this together? How do you make sure a cobot is this entity that can think together and work together ... and achieve that goal?

And I’ve been thinking about the architecture and the deployment model, and I think this is where the trade-offs become real. Like, do you fit everything in one all-can-do robot, or do you have many specialist robots with this brain — which I call an edge micro data center — that has your infrastructure? You have GPUs in there, you have CPUs, you have compute and storage, and then the data from all of this gets there, gets processed locally (still pretty fast), and gets sent to the robots. So, you have a level of autonomy within those robots, and then a second level of autonomy at that edge micro data center. And I truly believe that’s the architecture that’s going to work, at least in the next few years. When you look at the trajectory of the hardware, more and more of it is shrinking down to fit inside a robot.

But it doesn’t necessarily mean that’s enough to achieve a certain goal. Again, it becomes an infrastructure problem that you have to solve ... and figure out the trade-offs, weights, power, all of that.

Paul: Tell me more about the micro data center. In my head it conjures up a certain image, which is probably not accurate, but it’s about proximity, right? It’s proximity to where the data is being put, streamed, processed, as opposed to a massive centralized exabyte scale data center. Unpack the problem with a hyperlocal robotic application like the one we were just describing, and then the infrastructure needed to support it. 

Iman: I can actually give you a silly example. This is something that I’ve been trying at my own house, and it’s applicable to any of these architectures. I have an old robot, and it doesn’t have a lot of compute on it.

It’s pretty old and I was thinking, how do I actually make this robot agentic, where it can think and do things? And I didn’t have a crazy end goal. For me, the end goal was can I keep my cats pretty entertained with this robot. But I did want it to be more autonomous.

It was a good way for me to try some of the new technologies we’ve been working on, actually. So, over the past couple of years, really my focus has been edge AI and bringing everything down ... closer to the edge. And as you mentioned, it becomes very important when you have to make decisions fast.

Because physics is still physics, right? If you need to send something even through fiber to the cloud, you have to pay for that round trip, and it needs to go there and come back. Which is fine ... it works for certain use cases. But for something like robots, you probably don’t want that round trip latency.

So, you want it to be close to the robot, to the user, and you want the decision to happen very fast. For example, in our own factory, we have a rule of two seconds: if you need to make a decision within two seconds, you probably don’t send it to the cloud and back. You have to make that decision locally.

Back to the example, we’ve been working on edge AI devices, where you can bring agentic AI to the edge and be able to load it and make it run without any internet connection. Now, you could connect it to the internet if you wanted to. We don’t stop you from doing that.

But the idea is that if power goes out, you have backup power. If the internet goes out, you’re not reliant on that. So, this idea of micro edge data center is that you have these building blocks, which I call edge AI devices. And you could use one of them, five of them, 10 of them. You’re not building a full data center, but you still have a small building block that you could host your agents on.
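The "two-second rule" Iman describes is essentially a latency-budget check. A minimal sketch of that routing decision, with entirely hypothetical names (`Task`, `route`) and made-up numbers, might look like this:

```python
# Sketch of a latency-budget check: route a task to the edge when the
# cloud round trip plus upload time would blow the decision deadline.
# All names and figures here are illustrative, not a real Seagate API.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_s: float   # how soon a decision is needed
    payload_mb: float   # data that would have to travel to the cloud

def route(task: Task, cloud_rtt_s: float, uplink_mbps: float) -> str:
    """Return 'edge' or 'cloud' depending on whether the round trip fits."""
    transfer_s = (task.payload_mb * 8) / uplink_mbps   # upload time in seconds
    cloud_total_s = cloud_rtt_s + transfer_s
    return "cloud" if cloud_total_s < task.deadline_s else "edge"

# A robot's obstacle-avoidance frame can't wait for the round trip...
print(route(Task("avoid_obstacle", 0.5, 25.0), cloud_rtt_s=0.12, uplink_mbps=100))  # edge
# ...but an overnight retraining batch easily can.
print(route(Task("retrain_batch", 3600.0, 500.0), cloud_rtt_s=0.12, uplink_mbps=100))  # cloud
```

In a real deployment the deadline and link numbers would come from measurement, but the shape of the decision is the same: physics sets the floor on the round trip, so tight deadlines stay local.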

Paul: Is this in the home, by the way? Or is it less localized?

Iman: Yeah, right now it’s in my garage. I’m actually hosting it in my home, but it could be anywhere. I’m using one building block because that’s enough for my silly use case. But in a factory environment or industrial environment, you might use five of them. When I think about edge, it’s really a spectrum. It could be your house, or it could be a factory, or it could be a farm. But the whole idea is that it’s far away from a big data center and you need to make decisions fast.

And this is where edge AI and the edge micro data center idea come into play ... where you have many sensors, you have many robots and they all talk to this mothership of sorts. And they can make decisions. 

Paul: But it’s interesting because, when you look at the trend of computing architectures, and we talk about this a lot at Seagate: the cloud took a lot of computing tasks and centralized them.

And then what you’re talking about is decentralizing computing ... to the extreme.

Iman: Yes, exactly. I see this as a cycle that we go through. Like server and client. If you look at the past 20 years, we go through this cycle where on-prem becomes powerful, and you could do more with it. So, you do, because it’s more deterministic ... it’s your own environment.

But I think, generally, the architecture has always been hybrid. It’s just a matter of the availability of resources. I think every time a hardware innovation happens, you have more to do on-prem, and we take advantage of it. And there are times when that’s not enough, and you start pushing that workload to the cloud.

In my mind, it’s all going to be hybrid. It’s going to be hybrid, edge and cloud, and hybrid storage. And we talk about this a lot, like how storage layers and tiers matter to the memory of those agents and the context of those agents. There’s no one size fits all. At least that’s the way I see it.

I feel like all these use cases will have a sweet spot, and then they will evolve in the next few years. 

Paul: But, for the robotics use case specifically — because of the data life cycle and where the processing needs to happen for that specific use case — that hybrid model is so critical, right?

In managing the data across the life cycle. You just mentioned context, right? Can you unpack what that means for us? 

Iman: That’s my favorite topic nowadays. Me and you, we talk about this quite a bit. 

Paul: Yeah, we talk about this a lot.

Iman: Yes. It’s very relevant. Even at GTC, you could see context becoming a first-tier discussion now. Last year, it was something that you needed to think about, and when it comes to large language models and agentic AI, it is very important. But now it’s actually becoming the bottleneck for AI agents and autonomy to become real.

Context, if I want to simplify it ... the best way is to use a human analogy. At the end of the day, agents are trying to mimic real humans from a thinking and acting perspective. And they need to have a good memory, because even if you have the best engineer in the world or the best partner in the world, if they don’t have a good memory, they’re not going to do a good job.

That really comes down to the context and memory of agents ... it is what enables them to go from just good to this amazing autonomy. And it’s inherently a storage problem because that context needs to be stored somewhere. It needs to be accessible and it needs to be searchable. And that agent should be able to look at that and find the information it needs in as little time as possible. Putting all of that together, it means that if you have the right context and the right amount of context, then your agent is going to do amazing things. But if you feed it garbage, it’s just going to give you garbage, right?

Paul: And it’s so interesting thinking about this, because more and more we’re using chatbots and agents. There’s a high degree of human intervention ... the human is prompting the experience with the AI. You can see firsthand when you’re prompting and you’ve already told it something and it kind of forgets, or you know there’s an issue with the user experience ... you can see it.

Iman: Yes. 

Paul: And then you can kind of course correct it because you’re managing the interaction. That’s how I visualize this idea of context. But then you think about agentic AI — where you’re trusting the AI and there’s minimal or no human intervention.

And you think about doing that at scale, and then you’re handing over operational tasks in an enterprise. Potentially mission critical enterprise tasks. Then the idea of context and inherent data is so critical.

Iman: The way I think about it is that, at the end of the day, large language models, which are the core of agentic AI ... that’s still the brain. They’re stochastic. They’re, by design, not deterministic. So, as you mentioned, if you want to delegate work to them (and even humans are probably stochastic), you want to delegate it to somebody you know will do a good job 9 out of 10 times.

And of course, the more critical that task is, you want that percentage to go higher. Now, there are different ways to ground and control these agents without being there and chatting with them. And you mentioned, as you’re using a chatbot, you could totally see. If you give it bad information, it goes off the rails, and sometimes you have to even start over, right? You have to remove everything. 

Paul: Yeah. Or it’s trying to synthesize ... it’s trying to fill in the gaps. Like where it’s missing context ... it’s trying to do its best to fill in the gaps. Because I guess it doesn’t want to go further back. This is my assumption. There must be some parameters that have been set in the backend, because it has the data. I’ve fed it the data before, so either it’s just not going back far enough in time to retrieve it because of some cost parameters, or something else is going on that’s preventing it.

Iman: That’s an interesting one. Yeah. That’s actually a known problem with LLMs. There’s a benchmark called needle-in-a-haystack, and the whole idea is: can you find that needle in a haystack? If you give it a lot of stuff, can it go back and find that specific thing you asked 20 minutes ago, two hours ago?

And if you look at the new models, the LLMs, there actually is quite a bit of focus on making this better and getting better at finding that. But even if that happens, that only applies to the active context. The hot one that’s in memory and it’s sitting there.

But what if that agent has to think about what happened two weeks ago? That’s not going to be there. And that’s where this whole idea of tiered context comes in, where you have a short-term memory, just like humans, and a long-term memory. The short-term memory is right there; you just have to get better at retrieving it. The long-term memory is not there; you have to dig for it. You have to spend energy. Even when I need to think about something from 20 years ago, I have to close my eyes, and the older I get, the harder it gets. But I have to dig for it and go back, and really try to find that information.

So, the exact same architecture applies to agents. The question becomes: how do we, as an industry, make it easier for these agents to retrieve that data and make that data available to them? And I think that’s like the baseline. As a storage company and a storage guy, I see so much data being thrown out. In the past 10 years, I think me and you have probably talked about this more than 10 times ... the data that you’re throwing out today might be super beneficial in five years. You just don’t know it yet, right? That’s the baseline, and then how do you make that searchable and indexable and easy for that agent to retrieve? That’s the problem everybody’s trying to solve. 

Paul: Yeah. I even think like five years, yes, true. I agree. But also, two weeks or a month. 

Iman: Yes. 

Paul: Right? For sure. If you have this kind of long context window with lots of turns and you’re taking in the workflow that you’re interacting with the AI, even a piece of data or piece of context from a month ago ... if the underlying architecture isn’t built to enable retrieval of that data, then that’s going to have an impact on outcome. And so, it strikes me that this is fundamentally shifting the way data gets tiered or needs to get tiered, but it strikes me that storage is becoming more active because of this workflow.

And the lines between the way the industry defines memory components of that tier and the way that the industry has defined storage, and that kind of tier within the architecture are becoming increasingly blurred. 

Iman: They are, yeah, absolutely. I feel like the general storage pyramid ... which starts with fast, small storage at the top and very slow, big storage at the bottom ... still applies, but as you mentioned, the workloads are changing. We’re going from applications that would just query the storage to these long-living agents. They’re running, they’re thinking, they’re deciding what they need in the moment. We don’t tell them, “Go fetch this data.” I mean, we might if there’s a human in the loop, right?

But if you’re doing a full autonomous loop, they decide. They might even talk to each other, agent to agent, and decide what information they need, which is fascinating. And the other part of this is that nobody really 100% understands how LLMs work yet, because by design, they’re like neural nets with an attention layer, so they’re black boxes.

And you see even Anthropic is doing research on how they behave, how they decide to respond to you. So, we don’t know, and when we don’t know, we don’t know how to design for it. It becomes this chicken-and-egg problem of how you give them the memory they need. I was thinking about this a couple of days ago, Paul. I’m like, it’s a simple problem: you have to give them the context they need when they want it. That’s it. But how do you achieve that? That’s really the big problem. Like, how do you tier it up and down, and how do you save everything, and how do you index it?

What formats do you need? All of that ... I think those are the constraints for a storage and memory design for agents.

Paul: It strikes me that the way things have played out with large language model development is that a few companies have led the charge at the frontier of true large language models. And so, you mentioned earlier, when it comes to agentic AI, agentic systems — automated systems in the enterprise — it strikes me that those language models will be licensed rather than developed ... maybe licensed for adaptation for specific use cases and industry workflows in the enterprise.

And so how those get deployed at scale, in those very specific agent use cases — IT service desk, for example. A lot of the kind of uniqueness of how that agent gets activated and deployed is based on the context, right? The proprietary data sets that the agent and the LLM adaptation are fed. 

Iman: I would argue that the customization of the personality of agents primarily comes from the context. So, if you think about LLMs ... nowadays we talk about harness engineering. What is a harness? A harness is everything that goes around the LLM to make it behave a certain way.

So, it’s the system prompt, it’s the tools that it can use, it’s the guardrails that it has ... all the skills in Claude Code, for example. These are all part of a harness that somebody builds to make an LLM an agent that actually achieves a goal for you. Because that’s what we want.

I mean, everything we do today is cool, but unless it’s solving a problem for you and it’s delivering that return on investment long term, it’s just a toy or a demo, right? To turn it into a product, you really need to show that ROI and measure it. And in order to get there, this agent has to be customized, and context is the first thing you start with.

You give it the system prompt. You tell it what tools it has access to, where it lives. Is it in a factory or in a home? What are you trying to achieve? What are your goals? So, it’s really the core. That memory is what makes that LLM an agent with a personality, and in my mind, that becomes one of the most fundamental ...

I mean, it is a problem, right? Like, in a sense that you want it to make it as easy as possible for that agent to access that context and make it as relevant as possible so that it actually does what you want. So, solving that problem becomes a core focus for a lot of folks, especially if you’re not just building LLMs.

So, the big frontier models, they’re going to keep building amazing AI, and we’re going to use them. But if you want to use it in a factory setting, how do we go from a generic model to a very vertical specific agent? 
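The harness Iman describes ... system prompt, tools, guardrails ... can be shown as a tiny wrapper around an LLM call. This is a minimal sketch under stated assumptions: `llm_call` is a stand-in for any model API, and all names here are hypothetical, not a real framework.

```python
# Sketch of an LLM "harness": the system prompt, tool list, and guardrail
# wrapped around a model call are what turn a generic LLM into a
# purpose-built agent. Names are hypothetical, for illustration only.
from typing import Callable

def make_agent(llm_call: Callable[[str], str],
               system_prompt: str,
               tools: dict[str, Callable[[str], str]],
               guardrail: Callable[[str], bool]):
    """Return an agent function grounded by its harness."""
    def agent(user_msg: str) -> str:
        tool_list = ", ".join(tools)  # advertise available tools in the prompt
        prompt = f"{system_prompt}\nTools available: {tool_list}\nUser: {user_msg}"
        reply = llm_call(prompt)
        if not guardrail(reply):      # guardrails veto unsafe replies
            return "[blocked by guardrail]"
        return reply
    return agent

# A fake echo "LLM" just to show the harness shape, not real model output.
factory_agent = make_agent(
    llm_call=lambda p: f"ack: {p.splitlines()[-1]}",
    system_prompt="You are a conveyor-belt monitoring agent in a factory.",
    tools={"read_sensor": lambda s: "ok", "halt_belt": lambda s: "halted"},
    guardrail=lambda r: "halt" not in r,  # never auto-halt without review
)
print(factory_agent("What is the belt status?"))
```

The point of the sketch is Iman’s: the same frontier model becomes a factory agent or a home agent purely through what the harness and its context supply.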

Paul: If I work for an enterprise organization today, and I’m listening to this and I’m thinking about my AI strategy long term, what are some of the things I should be thinking about as I map out that roadmap? 

Iman: The way I would approach that or answer that is that we got to go back to fundamentals. We know things are going to change. We know LLMs are going to get better.

Hardware is probably going to get better. But the core problem remains the same: if you don’t have the right data and the right context, you’re still going to have a problem. So, if I were building an enterprise or thinking about my IT strategy or enterprise strategy for the next few years, my focus would be: how do I collect relevant data and keep it clean?

I mean, clean data is very important. Actually, that’s one of the hardest things, right? It’s something people don’t love talking about because it’s not glamorous, but you need to make sure your data is clean, has the right metadata, and is searchable and easily accessible. And then you have an infrastructure strategy where you can modularly plug in new types of databases and storage. For example, today we focus a lot on graph databases and vector databases, two things that agents use to grab information into their context. We didn’t talk about these two years ago, so two years from now could be completely different. But if you have good data that can be transformed, and you have a modular architecture and infrastructure, then it’s easy to adapt and enable your agents and the future of agentic AI in your environment.
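The vector-database lookup Iman mentions boils down to: embed documents, embed the query, return the nearest match. A bare-bones sketch, where the toy bag-of-words `embed()` is a stand-in for a real embedding model:

```python
# Minimal sketch of vector retrieval: embed everything, rank by cosine
# similarity. The bag-of-words embed() is a toy stand-in for a real
# embedding model; document contents are invented for illustration.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())   # toy "embedding": word counts

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "conveyor belt jam cleared at station 4",
    "robot arm recalibrated after drift",
    "nightly backup of factory sensor logs completed",
]
index = [(d, embed(d)) for d in docs]   # precomputed, like a vector index

def retrieve(query: str) -> str:
    qv = embed(query)
    return max(index, key=lambda pair: cosine(qv, pair[1]))[0]

print(retrieve("why did the conveyor belt jam"))
# → conveyor belt jam cleared at station 4
```

Swapping the toy `embed()` for a learned embedding model and the list scan for an approximate-nearest-neighbor index is, in essence, what a production vector database does.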

And I truly think that is the future. I mean, we are at a point that, even personally, if I don’t have five different agents doing something at any single time, I feel nervous. I’m like, “I’m underutilizing my agentic platforms.” 

Paul: What are your agentic workflows that you have working for you today?

Iman: Yeah. I have one agent, as I mentioned, that I attach to my robot that’s for the home stuff, and it walks around and whenever it sees one of my cats, takes a picture and sends it to me. That’s just completely goofy, but you know ... it’s a good way of playing with robots.

But for work, I have a research agent that tries to keep up with everything that’s going on with edge AI. So, I run it continuously. It looks for new sources, things that are going on in the industry, and it creates reports and updates me every day, and tells me if there’s something very important I need to reach out about.

I use agents for all my scheduling now, which is great. That was, like, one of the pain points. We would get on a call and you’re like, “Okay, can I do this?” Now I use Copilot and I’m like, “Okay, just go and find these, block some of my calendar,” and it does it every day. It’s amazing. I could see in our own organization how you go from just a chatbot to something that can actually do things for you.

And for me, that was interesting, where with Copilot initially you just had a chatbot and you’re like, “Wow, this doesn’t really do much.” But now it has access to all my data and it’s amazing. I basically tell it to go, like if an email comes in and they’re looking for a specific timeframe to meet, I don’t want them to wait for me to check my email and figure it out.

I have certain times allocated, and it goes and makes sure that the topic is relevant. I set something in its context to say, “Okay, for these things, these are hot topics, schedule it right away.” And it’ll go and find a time that works for me and my team, and it will schedule it. And it just lets me know.

And it’s amazing. I used to spend so much time on this, and I absolutely hated it, and I always felt like, I could have a personal admin do this, right? And now my agents can do this. I think the most common one nowadays ... actually on LinkedIn, I put a post asking people what they use agents for today. Before coming here, I wanted to have that answer. 

Paul: I did your poll, by the way. 

Iman: All right. Great. We got a few answers, but what was interesting was that it was a 50/50 split between research and coding. So, you could see where we are today, and I think those are the two areas most people are using it for. I want to do the same poll maybe six months from now and see how that’s evolved. 

Paul: It’s relatively new, so I’m still refining it and curating exactly what I want to see, and filtering out some of the stuff I don’t. Of course, because if it’s too much, then there’s too much to parse through. 

Iman: Right. Yeah. Too verbose sometimes. And you still have to steer them, right? I feel like that. But that’s where, again, building context, a personal context for you, for example, as you look at these behaviors and say, “Next time, don’t do this.” And if that works well and the LLM grounds itself into that context, then suddenly you have this amazing agent that does a really good job for you. 

Paul: Yeah. Very cool. Iman, it’s been such a fun, interesting conversation. Do you have any questions I didn’t ask you that you wish I’d asked you?

Iman: I want to do a shameless plug about the work we’re doing. We just showed our proof of concept for our edge AI box, and we call it Edge AI Pro. And we’ve been working on this for a couple of years. And I say ‘we,’ but I probably have done the least amount of the work. Most of the work happens in our amazing research group.

And we’ve been really focusing and watching where the agents are going, and we finally got the proof of concept out in public at GTC and showed it to people. And it’s always nerve-wracking, right? When you’re bringing a product like that out, and you’re talking about it. You’re worried about bad feedback, but of course you want to understand it. But I was so surprised, because we had so many people coming in with interest and different use cases. Some robotics, some smart cities, some were just doing home automation, and they all wanted something similar.

I’m not going to say exactly that, but they would probably buy it and take it home today if they could. So, the interesting thing about it is that we actually have deployed this internally in our own factories, and we’ve already seen really good results. But now we want to go after a wider audience, a more agentic approach.

And what’s novel about it is that we’re bringing everything together in a very small portable box. So, you have the AI, you have the NVIDIA chip, but we married it with multi-tier storage. And this multi-tier storage in a small box is really hard to do. It’s easy to do it if you’re in a data center because you can scale and do different things, and get very creative. But on a smaller scale, you only have a few knobs you can turn. And you’re dealing with a lot of trade-offs: power, cooling, heating, all of that. I think we finally hit a sweet spot where you could have hundreds of terabytes of storage and context memory, plus thousands of teraflops of compute married together.

And what that really means is that you could run fully autonomous agents. And actually, that’s where I’m running my robot agents now. So fully autonomous agents. 

Paul: What you’re describing is the micro data center, right? 

Iman: Yeah. Absolutely. It’s like a tiny — I mean, it’s even smaller than a PC — but it’s technically a full data center that you could run an agent on, and it connects to the cloud if you need it to, but you don’t have to. It’s just completely secure. I’m super passionate and excited about it, so I had to do a shameless plug here. 

Paul: Yeah. I love the cat feeding application for it as well. And you can see some of this on your LinkedIn, right?

Iman: Yes. I’ll probably be posting some pictures of that. But it’s amazing. I mean, I was able to revive a very old robot. This is an Anki Vector. It was a startup many years ago. Some smart folks started it. But it doesn’t have a lot of local compute. But now I can offload this intelligence of the agent to the micro data center. So somehow my robot is alive now and it can think, and it’s just fascinating to see how easy it was to get here. 

Paul: All right, Iman. On The Data Movement, we do a five-question lightning round. So, let’s just get through these really fast. I’m really curious about your responses. Question number one, what is the one AI tool or technology that you wouldn’t or couldn’t do your job without?

Iman: Codex CLI, that’s where I live and breathe now. Yeah, that’s the one. 

Paul: Okay. Question two, are we heading towards a world with fewer, more powerful language models, or lots of smaller, more specialized ones? 

Iman: Ooh, I think it’s going to be a mix of both. Both more powerful centralized ones and a lot of more specialized ones.

Paul: Question number three, is the edge becoming more important because of AI, or is AI becoming more important because of the edge?

Iman: I think the edge is becoming more important because of AI, because now you can actually do autonomous things at the edge, which you weren’t able to do even two years ago.

Paul: Question number four, what is the biggest unlock for AI? Better models or richer context? 

Iman: Ooh, controversial. I’ll go with richer context. 

Paul: Nice. 

Iman: Yeah, I’ll get some heat on this, but ...

Paul: Yeah. The answer is obviously both, right? Yes. I like how you went there. Okay. And then question five, final question. Complete this sentence: The future of data is... 

Iman: Mind-blowing. 

Paul: Amazing. Iman, thank you so much. I learned a ton, as I always do from talking to you. But the reason I love talking to you is, you have all of these ideas and understanding of what’s going on, but then you’re also embodying it and doing it in real life, like feeding your cats with an AI agent.

So, it’s very, very cool and inspiring, and interesting, and I appreciate your time this morning. 

Iman: Thanks, Paul. Thank you so much for having me. Always good to talk to you. And, yeah, you know me. I have to get hands on to learn. So, for me, that’s the beauty of AI, nowadays, that it’s easier than ever for everybody to get hands on. Even robots are something that anybody can get into now. It’s not just for engineers anymore, and that’s just fascinating to see. 

Paul: Awesome. Thank you, sir. Appreciate you. 

Iman: Thank you, Paul. 

Paul: That’s it for this episode of The Data Movement, a podcast from Seagate. Thanks to Iman for joining us and thank you for listening.

Subscribe for more conversations about how data is moving the world forward.

Paul Langston

Senior director, brand and integrated marketing