Women in Tech Development by Arti Raman of Portal26

Empowering Women in Tech by Arti Raman of Portal26


Episode Overview

Episode Topic:

Welcome to an insightful episode of PayPod. We get into the critical intersection of AI ethics and Women in Tech Development with Arti Raman, CEO of Portal26. She provides an in-depth look at the responsibilities and challenges faced by tech leaders today. With AI rapidly advancing, the importance of ethical considerations and data security has never been more paramount. Arti shares her insights on the excitement and momentum driving AI innovations and emphasizes the need for proactive measures to ensure safe and responsible development. This conversation highlights the essential role women play in shaping the future of technology and the unique perspectives they bring to the table.

Lessons You’ll Learn:

Listeners will gain valuable lessons from Arti Raman on the significance of ethical AI development and the pivotal role of Women in Tech Development. Arti discusses the importance of implementing security measures and the responsibility of tech companies to prioritize data privacy. She also touches on the need for more women in leadership positions within the tech industry, providing practical advice for aspiring female tech professionals. By understanding the ethical implications of AI and the steps necessary to mitigate risks, listeners will be better equipped to navigate the evolving landscape of technology and contribute to a more inclusive and secure digital future.

About Our Guest:

Arti Raman is a trailblazer in the field of AI and a prominent advocate for Women in Tech Development. As the CEO of Portal26, she leads a company dedicated to providing businesses with robust visibility, security, and governance platforms to accelerate generative AI securely and responsibly. Arti’s career is marked by numerous accolades, including being named a Top Ten Women CEO Shaping the Future of Tech in 2024. Her leadership and vision have positioned Portal26 at the forefront of ethical AI development. In this episode, Arti shares her journey, challenges, and successes, offering inspiration and insights for listeners interested in the intersection of technology, security, and gender equality.

Topics Covered:

The episode covers a range of topics related to AI ethics and Women in Tech Development. Arti Raman discusses the current state of AI technology and the ethical considerations that must be addressed to ensure responsible development. The conversation explores the critical role of data security and the need for companies to implement robust measures to protect sensitive information. Additionally, Arti highlights the importance of increasing female representation in tech leadership positions and shares her thoughts on how to support and encourage more women to pursue careers in technology. The episode also delves into the challenges and opportunities that come with the rapid advancement of AI and the steps necessary to create a more inclusive and secure digital landscape.

Our Guest: Arti Raman – Visionary for Gender Equality Especially for Women in Tech Development.

Arti Raman is the founder and CEO of Portal26, a company renowned for its innovative approaches in Generative AI (GenAI) visibility, security, and governance. Formerly known as Titaniam, Portal26 has evolved to address the burgeoning needs of AI-powered enterprises. Arti’s journey into tech leadership began with her extensive background in data security and encryption. She holds several patents and has been instrumental in developing frameworks for the responsible use of AI within enterprises. Before founding Portal26, Arti served as a senior product leader at Symantec, where she led UX and competitive intelligence for the enterprise business.

Arti’s passion for technology and commitment to ethical AI development are evident in her career accomplishments and the mission of Portal26. The company’s platform offers comprehensive solutions to manage the risks associated with GenAI, ensuring that businesses can leverage AI technologies securely and responsibly. Arti’s leadership has not only driven Portal26 to achieve significant industry recognition but has also set a benchmark for AI governance and data protection. Her innovative approach combines real-time visibility into AI usage with robust security measures, providing enterprises with the tools needed to navigate the complexities of modern AI applications.

Beyond her professional achievements, Arti is a strong advocate for Women in Tech Development. She has been recognized as a trailblazer in the tech industry, receiving numerous accolades, including being named a Top Ten Women CEO Shaping the Future of Tech in 2024. Arti’s dedication to fostering gender equality in the tech world is reflected in her efforts to mentor and inspire the next generation of female leaders. Her personal experiences and professional insights make her a powerful voice in advocating for a more inclusive and equitable tech industry.

Episode Transcript

Arti Raman: In my opinion, I don’t think there is any malicious intent. I don’t think even there’s irresponsibility. I think it’s the excitement and the momentum that comes from something like this. But I think it is the responsibility of the folks that are putting this out to actively, proactively put the brakes on, and then do that part two. So essentially, I guess what I’m saying is I don’t think people were bad at doing it the way it happened, but I think they could have been good.

Kevin Rosenquist: Hey, welcome to PayPod, where we bring you conversations with the trailblazers shaping the future of payments and fintech. My name is Kevin Rosenquist. Thanks for listening. Arti Raman has had an incredible career in business and her company, Portal26, provides businesses with a visibility, security, and governance platform that helps them accelerate generative AI securely and responsibly. Given the rapid adoption of AI around the globe, keeping data secure and knowing how models are being used within your company’s walls is more important than ever. Our conversation includes the ethics of AI, data security, and the need for action to encourage and demand equality for women in the business world. I hope you enjoy listening as much as I enjoy talking with her. Now, Arti Raman, I’m fascinated with the ethics of generative AI development and the idea of the technology being safe and created responsibly. Without naming names in the general landscape of AI, is it being developed irresponsibly or even dangerously by some companies?

Arti Raman: I would say that, in the realm of technology development, security typically takes last place, and I think in AI it’s no different. The promise and the excitement of having models and artificial intelligence be able to absorb, communicate, and generate information was so exciting that I don’t think ethical or secure development was part of it at all. I also feel like sometimes the releases happen, circumstances come together, and before you know it, the thing is out there, being adopted and accelerated before you’ve had much time. I think that’s exactly what happened with AI. It’s a short answer, but we can go deeper into it if you like.

Kevin Rosenquist: Yes. So is it the desire to be first? Is that why so much gets deployed before there are security measures put in place?

Arti Raman: No, I don’t think so. I think that developing technology that can materially change our experience of the world is pretty exciting. The core technology, the process, the algorithm, all of that is the heart of the development. That is where you have to start. The ethics come in with how models are trained, what data is used, whether the algorithms are transparent, and things like that, and those are always second-order considerations. That’s how it should be. The question is, when the excitement of that technology breakthrough is happening, do we pause, do we put the brakes on until step two is done properly? I think in this case, as in many other cases, the excitement takes over. So in my opinion, I don’t think there is any malicious intent. I don’t even think there’s irresponsibility. I think it’s the excitement and the momentum that comes from something like this. But I think it is the responsibility of the folks that are putting this out to actively, proactively put the brakes on, and then do that part two. So essentially, I guess what I’m saying is, I don’t think people were bad at doing it the way it happened, but I think they could have been good had they actively put the brakes on and done the security bits, as I said.

Kevin Rosenquist: Yes, totally does. I don’t remember what it was now, maybe a year ago, when that open letter came out from Elon Musk and, I think, Tim Cook. Some other people were on it, saying we need to hit pause and figure out what to do next. Where did you land on that letter? Were you on their side, or do you think that was kind of a ridiculous thing? It kind of went away.

Arti Raman: I was on their side. That’s exactly what I meant. We’ve proved the technology. Now we need to put the brakes on, because along with all the constructive power, there’s a ton of destructive power. We need to figure out what the guardrails are. We need to figure out how it could be misused. We need to try to block that, prevent that, and think it through. So I was definitely in line. Now, what was incredible about that letter was the names attached to it. The visibility of the whole thing was quite entertaining, among other things. But I do think that the crux of it was very real. I supported that.

Kevin Rosenquist: Heart was in the right place, but it’s always tough with certain people too, knowing what their motives are. So you always have to think of that.

Arti Raman: For sure, for sure.

Kevin Rosenquist: What can individuals do to keep their data safe as this quickly changing world of AI ventures on?

Arti Raman: I don’t think individuals can do much. What individuals can do is educate themselves so they don’t put themselves at risk, and so that their voices are heard when they call for regulation or guardrails or responsibility or accountability and things like that. But as far as the ethics of building models securely and building models ethically, the responsibility is 100% on the builders of the model. The data is already out there, so it’s not like I can do anything about my data being used or not being used to train a model. There’s nothing I can do. It’s been out there for a long time, and we still have all the data breaches we’ve been having. So all those risks exist, and that’s something that can’t be undone. So I do think the responsibility is squarely on those organizations. It’s our voices that we can use. Not a wonderful answer, but that’s reality.

Kevin Rosenquist: No, I think you hit it on the head. It’s an honest answer and I appreciate that. I think it’s easy for people to go, well, here’s what you can do, but no, you’re right. There probably isn’t much we can do. I like to follow AI, I’m very intrigued by it, and I follow newsletters and stuff. I am always seeing new products, and with a lot of them it’s like, okay, I didn’t know we needed that. But people are developing, developing, developing super fast. The concern could be that those people are developing AI too quickly without doing the right things, but they’re also probably building on other people’s larger products, like OpenAI’s and such. So those smaller players trying to get out there in the tech world, do they pose a bigger risk?

Arti Raman: I think those people can help with the questions that you are raising. So, for example, now that we have a good understanding, well, some understanding, I wouldn’t say good, of what the model risks might be, you can develop an ecosystem of companies that address many of these questions. Whether it is managing and regulating the privacy of data that goes into training data sets, or privacy-preserving training pipelines, that’s a legitimate business, very high value. People should do that. My company can do that in one of its iterations, things like that. There are also people that can address access, who should be allowed to ask the model what type of question. Then there are companies that can look at model attacks, like poisoning. So now that we are understanding what this creature is, this ecosystem of builders and providers is actually probably going to be the answer to building these guardrails. Then you’re talking about a whole lot of companies that are using AI, like using OpenAI’s models and other models to solve problems. Those guys also have the capability to filter and process and put foundational prompts in that create guardrails, etc., so it can now move from a single area of responsibility to more of a shared responsibility.

Arti Raman: I think therein lies the answer. So I think this is all good news from this point on. These are also people and companies that you can then regulate. For example, I’m going a little bit deep here, but let’s say I’m a large enterprise and I want to procure products that are AI-enabled, because it helps me build products faster and makes my people more productive, and all of that benefit. Now, as a buyer, I can mandate. I can say, hey, my third-party risk assessment, which is traditionally what we do when we buy technology, includes AI risk assessment, and I need my vendors to conform to this, this, and this standard. Now you have a stick with some teeth on it. I don’t think sticks have teeth, but you know what I mean.

Kevin Rosenquist: I get it, I get it. So the analogy works.

Arti Raman: So now you have a real way that you can arrest that runaway train because money is the ultimate motivator. So these companies will then build and put those guardrails in. So I think there’s hope in this way.

Kevin Rosenquist: You have a very positive outlook on it, which is great, because there are a lot of doomsday people out there, whether they’re worried about Terminator 2 scenarios or about things like job loss. Is there any larger aspect of AI that has you concerned?

Arti Raman: No, I’m positive about AI generally. I think we have a ton of work to do to make sure we use it safely, but I think the benefits outweigh the risks. Frankly, human beings already wield a lot of powerful technologies that could destroy us all and end the world. So I don’t think we need to be especially gloomy about AI. But because this is fundamentally technology, we still have the power as humans to put in the right controls, and we should continue to do that. And there are so many benefits if you think about it, like health and medicine, and we don’t even need to make that list. It’s amazing.

Kevin Rosenquist: Every aspect of life.

Arti Raman: For sure.

Kevin Rosenquist: So enter Portal26, the company you founded in 2019, which was well before the general public even knew what AI was outside of movies and entertainment and books and stuff. You guys offer a visibility, security, and governance platform to help companies accelerate generative AI securely and responsibly. The tagline on your site is “Remove the GenAI blindfold.” What do you mean by that?

Arti Raman: Since you brought up that we were founded a while ago, I can say a few words on that. When we started, we were all about keeping training data and analytics data clean. This was before the generative aspect of AI was out there. So we’ve been doing that for a while, making sure that if you do AI and ML, you can use that data without exposing it. That’s what we did, data security and data privacy, from 2019 to the middle of 2023. Then generative AI came out, and we realized that not only is there a place for making sure training data is clean, but there’s a lot of value in helping companies understand what’s even going on with AI in their company. That product came in the post-ChatGPT era, and it is proving to be incredibly valuable. I’ll give you an example. If you aspire to leverage AI, you can dream up how you think it might help you, but you may or may not be right. It may work, it may not work. Your engineers may be more productive, they may not be. They may be spending the time drawing pictures of astronauts or horses. You have no idea what’s happening. And I didn’t make this up, a CIO told me this astronauts-and-horses thing; it’s exactly what he wanted to avoid. So, as a decision-maker, wouldn’t you want to be very, very well informed in your AI investments by knowing what people are doing with it?

Arti Raman: So what this product does is look at all of your AI usage in the company. It sits on top of the network, and it can tell you who’s using AI and what tools are being used, and it uncovers shadow AI. You didn’t realize your people were using 235 AI tools; you didn’t even realize there were 235 AI tools out there. What are they putting inside prompts? Is there a security risk? Are they uploading sensitive data into ChatGPT? You want to know that. Are they taking too long? Is it taking them 23 attempts to get the AI to do what they want? Then they need to be educated. Or, hey, you have a security system that isn’t catching these data leaks, you need to update that. Or we’re spending X million dollars with AI providers and people are spending that million drawing pictures. There’s so much: security, privacy, education, policy, decision-making, and financial optimization, all of this can happen much better if you know what’s happening. Hence the blindfold on our website. So this is what we do. We’re an adoption platform. We give you tons of visibility and security and then a whole lot of actions you can take based on that. Of course, we’re AI-powered as well. So we use AI to govern AI.

Kevin Rosenquist: Sure. Makes sense to me. What were you seeing in 2019 that made you go, this is something that society needs, the business world needs?

Arti Raman: Data breaches. It was data breaches. There were so many data breaches out there. At the time, we didn’t have an AI tagline; we were about analytics, and to us AI or ML happens to be a place where you do a ton of analytics on steroids and have a lot of data together in one place. But it was data breach-driven. There was breach after breach after breach, and we were like, there must be a way to make this data usable without having so much exposure. So we started injecting encryption into these pipelines and things like that, and worked out how we can make sure the data is usable as well as safe. That’s where we started. Then when generative AI came along, we were like, oh my God, it’s all the more important, more important than it ever was before, that both the data use case and the analytics use cases are combined together.

Kevin Rosenquist: Your product, your system, it integrates with whatever people already use. Correct?

Arti Raman: That is correct. Thank you for asking that. We don’t deploy new stuff. So if they have an existing secure web gateway, like Zscaler or Netskope or Cisco or Palo Alto Networks, they’re already trying to monitor what’s happening, but they aren’t equipped to do that very well for AI. It looks different, the data streams look different, and the conversations, context, and intent matter. So you have to be much richer than monitoring network traffic like you did before. We bring that extra layer of intelligence on top of what those guys already do. That means they can turn it on in like 30 minutes. Now I’m pitching the product, but you see what it is. It’s very powerful.

Kevin Rosenquist: That’s cool, because people often fear change, especially in the business world, as you undoubtedly know. Trying to get them to switch systems is not always easy. But if you can offer a product that is an add-on to what they already use, then that’s a lot more palatable, I feel like.

Arti Raman: Yes, that was our strategy to begin with, because I do feel like the larger the organization, the bigger this problem. So you kind of look at it as a constrained optimization problem: where would this product be most useful? Well, if you have a five-person shop, you can talk to each other and make sure you don’t do anything bad; you’ll be okay. If you have a 50,000-person company, it’s probably going to be harder, and from a statistics, a numbers standpoint, you probably have a few people that aren’t happy, maybe not malicious, but maybe disgruntled or something.

Kevin Rosenquist: Or careless.

Arti Raman: It’s a numbers thing. Inadvertent mistakes. So the larger the organization, the bigger the problem, but the larger the organization, the more challenging it is to roll out new endpoint agents or deploy some new technology, or to get a multinational, with all of its divisions and departments, to even agree on how you’re going to monitor AI. There are tons of AI councils, ethics committees, all this stuff that I’m running into as well. But the minute we go on top of something they’ve already had, that’s already been blessed, that’s been through everything, it’s so easy.

Kevin Rosenquist: I can see that for sure. So we talk mostly about fintech on this podcast, but we go all over the place, and I’m curious. Fintech companies are heavily regulated; that goes without saying. AI is not super heavily regulated, at least not yet. What do you find challenging when you’re trying to work with fintech companies? How do you stay within all the regulations involved with a fintech company while still deploying what you guys provide?

Arti Raman: No, that’s a fantastic question. We might be the only game in town, including among the large vendors, that supports all of the audit, compliance, and regulation that the context needs. So for example, as a fintech, you have to be able to prove due care; you have to be able to prove that you in fact put the right controls in place, etc. There’s no way to prove that with AI right now. Without us, how are you going to prove that the fraud algorithms you were calling didn’t have anything in them that they weren’t supposed to have? Without us, you can’t even look back 30 days and make a representation, because that’s not how AI works. You talk to the model, the model ingests everything, gets smarter, and sends something back. There’s no observability, no logging, no forensics. There’s nothing. So when you put us in, we become that observability layer for AI usage. Our very first set of customers were fintechs. They specifically said, thank God, now we can allow people to use AI, because without being able to audit and make representations to regulators, we could not do that. So that’s part of the benefit, because we keep all the history and everything. Now, on the models themselves, that’s not us. I’m sure that fintechs, when they build or utilize LLMs or put private wrappers around public models, etc., have to have a whole set of scorecards on model fairness and model transparency and model ethics. But that’s not us. That’s not my company. But that’s very, very valid as well. Does that make sense?

Kevin Rosenquist: Yes, it makes sense. It sounds like you guys are very much needed in the fintech space.

Arti Raman: For sure. We like to think that no matter what happens with AI, whether certain companies are more successful than others or there’s a combination of public and private models, the monitoring and the visibility and the reporting and the audit and all of that is going to be needed regardless. So we’re keeping our fingers crossed that either way, it’s a winner.

Kevin Rosenquist: How can companies prepare for upcoming AI regulations? Can they prepare? It seems like people have some ideas but are not sure what’s going to happen, mainly, from what I can tell, because so many of the people that will be putting in these regulations don’t have a lot of understanding of AI. So what do you see coming as far as regulations, and how can companies be prepared?

Arti Raman: So I think that the regulations will come on both the model and the data side. I think there are three things: model regulations, data regulations, and disclosures around usage and incidents. I think in threes; it’s served me well. So these are the three. Model regulation is going to be about model transparency. I think there’s already movement there. The model builders are moving to the point where you can create a scorecard and say, yes, this is kind of how we work. There’s tons of pressure, so I think that’s going to pay off; otherwise, adoption will slow down. I think they need that, so that’s going to happen. On the data side, I think the EU is already doing that, and I’m also seeing and hearing things about training data regulation: what kind of data can you use to train a model? They’re also mandating some sort of, I want to say, segmentation or access, as in who can ask the model to use what type of data. Because a model was trained on something confidential, somebody that wasn’t cleared for access to that data shouldn’t be able to ask the model that question. So I know there’s a ton of work happening on the combination of training data, data use, and then model access, and I know there’s a lot of funding going in there as well. I think that’s going to happen; I would say in the next 12 to 18 months you’re going to see some answers there as well. Also, in the second bucket, I think you’re going to see smaller models. Right now you have models that are trained on everything to answer everything. Those are the hardest to regulate.

Kevin Rosenquist: For sure.

Arti Raman: You can’t draw boxes around them. Whereas if you get narrow with models and you say, okay, this thing is about financial fraud, and I’m making that up, now you can have a very good sense of how to draw those boxes and how to mandate, because you know when something is on the other side of the line, and things like that. That’s happening. And, again, I’m meandering now, but in this same box is the notion of smaller, more energy-efficient models as well, because there’s that problem too. If you’re able to make a model narrow and focused, you’re also able to make it faster, and you’re able to make it run on the edge somewhere. So that movement is happening, and I think that’s going to address, as a byproduct, as a side effect, some of these other regulatory needs as well. Then finally, on the usage and disclosure side, I think that’s where we play, and I think that is coming very fast. In fact, you don’t even need regulation for it, this whole idea that if you have an incident, then you have to report that incident, as you would have seen. Obviously, there are some new rules about AI-based disclosures and AI-based breach disclosures; right there, you need a product like ours, because by mandating the disclosure and the reporting, you now need the ability to go back and look and report and all that. So I think number three is already almost there, and one and two are coming, if you remember what those three are.

Kevin Rosenquist: I do. I think it’s interesting what you said about the more niche kind of models: instead of being everything, you can have something that’s based on one specific thing. That’s got to be the future; that’s got to be where it’s going to go. It’s going to be more like you sign up for whatever healthcare GPT, or, as you said, fintech fraud GPT, baseball GPT, whatever you want. Is that kind of where you think we’re going to go?

Arti Raman: I do think so. I think there are going to be two threads. The one that is going to be most used by businesses, to solve specific productivity problems, etc., is going to be smaller models, and maybe chains of smaller models. You break down a question into little questions that are more specialized, you give each one to a different model, and you string it back together. So, model chaining. I think that’s where you’re going to get the most efficiency, etc. I do think the big model is not going to go away. Why? Because humans are fascinated by creating artificial intelligence that’s sort of a clone of themselves. So you will always have the people that want this model that’s like a human, that can answer everything and do everything.

Kevin Rosenquist: Smartest human alive.

Arti Raman: So I don’t think it’s going away, but I don’t think that’s where you see the most ROI. That’s where educating the general public is going to make sense. You don’t want senior citizens and students talking to these all-knowing models without being able to differentiate fact from fiction. You don’t want them taking directions from this thing, though maybe they’ll improve it over time. So I think the all-knowing models are going to remain a fascination for people, but I think businesses are going to go more towards the other thread. We’re already seeing that, since we can see the breadth of AI utilization in an enterprise. We actually can’t see their data, so this is all metadata, looking at them and them telling us. We’re seeing a lot of people that start with the Copilot, the Office Copilot, and realize that it’s a nice general-purpose thing, but, and I don’t know if I should say this on the podcast, there are specialized models that are better at specialized tasks than the general thing that comes with Office.

Kevin Rosenquist: So we’re not sponsored by Microsoft. You’re all good.

Arti Raman: Okay. So very quickly they hop out of that Copilot world and hop into a specialized model. You know how viral our world is now; we’re all connected to each other. So the minute you know that, hey, this one’s good for that, pretty much before you know it, everyone’s using it. So I think we’re only seeing some of that.

Kevin Rosenquist: It’s exciting. It’ll be fun to watch. It’ll be fun to see where it goes. For sure. It’s going to be crazy. One of the key features of Portal26 is the ability to detect bias. What is your system looking for to determine if a bias is present?

Arti Raman: No, we don’t do bias detection in the traditional sense of model bias. But what we do is intent. From a user standpoint, we characterize intent. We use AI to scan usage and ask, what is this person trying to do? Why do we do that? Because that information is like gold. For example, the first thing we do is ask whether this person is trying to do something private or something for business. If it’s private, like I’m researching blood pressure medication, I have a problem with whatever, we leave it alone. We use all of the encryption firepower we have in our background, and we cordon it off so nobody can see it, because of ethics; privacy is a thing. If it’s business, we start characterizing intent. That tells us what type of problem they are trying to solve. We can even tell if it’s malicious or not. Then, is there something at the department level or the org level that’s meaningful? Is there a project being supported? Are there other people, other parts of the company, trying to solve the same problem? Do they need to be brought together? Are there synergies to be had, efficiencies to be had, things like that? We do intent, which is not the same as model bias.

Kevin Rosenquist: Ah, I misunderstood when I was looking at your site. I thought you guys worked on detecting bias, which is a problem with AI models. Ideally, AI is going to help reduce bias in things like credit and hiring decisions. But it’s still…

Arti Raman: That’s not us, but it’s very important.

Kevin Rosenquist: Got it, got it. Yes, it is important. I agree, I agree.

Arti Raman: I’d love to own the whole world of it. But when somebody hears about us, I want them to understand exactly what we do. So that’s why.

Kevin Rosenquist: No, I appreciate you clarifying. I misunderstood something there on the site, so I’m glad you cleared it up and I didn’t confuse anybody. I’m impressed, Arti. You’re a heavily decorated business professional. Let’s see, I’m going to go through some things here: Top Ten Women CEOs Shaping the Future of Tech 2024, Global Data Power Women 2023, 2023 Titan Female CEO of the Year, 2023 Titan Female Entrepreneur of the Year, 2022 Tech Trailblazer of the Year. I don’t even think that’s all of them. What do these accolades mean to you, especially as a woman in the largely male-dominated world of tech?

Arti Raman: It takes a bit of work to allow your name to get out there like this. You can hide or you can be visible. I think that unfortunately there are still tons and tons of biases against women. I have met so many wonderful people. What’s that?

Kevin Rosenquist: I said, speaking of bias.

Arti Raman: I know. I’ve met so many wonderful people who are great human beings. Sometimes it’s unconscious, sometimes it’s systemic, sometimes it’s circumstantial; it’s not always people being bad to you or doing bad things. So I thought, at minimum, I can at least try to inspire someone, or be a resource, or put my name out there so people can call and I can help them. So having my name out there, accepting some of these things, showing up, talking, and writing is my way of trying to give back. For me, it feels good to have that, but more importantly, it’s about more than that. I have a daughter; she’s 18. I hope she’s as fearless and bold and risk-taking. So it’s about those things. I also think the statistics about girls and women dropping out of STEM as early as middle school are depressing, because women think differently, and you’re going to laugh, but even in my boardroom I find that I don’t think the way a board traditionally thinks, and it creates these moments where I have to make room for how I think and help other people make room for it, because women are wired differently, and not in a bad way. I’m rambling a bit, but that’s the idea: to be there, and to be there for people. So ask me some other questions before I ramble away on that.

Kevin Rosenquist: No, you’re good. I’ve talked to several women on this show before about their efforts to support and encourage other women in business, and it’s always good to see people as successful as you, and some of these other women, being inspirational for the younger generation. Do you see progress as far as opportunities for women and pay equity? Are we moving in the right direction?

Arti Raman: I don’t think so. I think we have to keep trying. I also believe in “actions, not words.” It’s very easy to offer people words: words of encouragement, words of support, words of counsel, mentoring. I feel like it should instead be jobs, projects, opportunities, letters. Open a door, make a phone call, and give people some sort of an unfair chance. Those are the sorts of things we need to do when we try to help women. People talk about equity, but I almost feel like it has to go the other way for a while; we have to unfairly bring people in, because there are so many pent-up obstacles piled on top of each other that you have to go out of your way. This is going to sound weird, but I’ve stopped trying to be fair about the whole thing. I try to be unfair and create these opportunities for anybody I can find.

Kevin Rosenquist: I think you’ve earned it. Women have earned it. Not to go off on a tangent, but you could look at things like affirmative action, you can think of all sorts of things: you have to take drastic action in order to see a large systemic change like this. So I get what you mean. It has to go unfair in the other direction before it can meet in the middle and be equitable for everybody.

Arti Raman: At least on the micro level. I’m not advocating for unfair laws or rules. One to one, when you’re helping someone, there’s no need to be super fair about it. Go out of your way for a person, do something generous or kind or open, create an opportunity, and you create this ripple effect. Then maybe they’ll do the same and you’ll get some kind of network effect. But I think we have a lot of work to do; like, oh my gosh, so much work to do.

Kevin Rosenquist: It’s interesting, because I think a lot of people not in the business world don’t realize that. They read the stuff, oh, women make, I don’t know what it is now, 80-something cents to the dollar, maybe it’s less, but they see that and they’re like, oh, that’s terrible, and then they don’t think about it. When you’re in the business world like you are, you see it so much more; you see it firsthand. I feel like this topic comes up a lot on this show when I talk to women, and it’s cool and deeply sad at the same time. It’s cool that women are trying to help each other and make business a good place for women to grow. It’s also sad that we’re still having this conversation.

Arti Raman: I agree with you. One of the things I’ve realized is that for most people it’s very fundamental: it’s about being able to do what you want to do without the unsaid expectations, whether you wish to progress in business, or study, or write some algorithms, or get funding from A, B, or C. So it’s much more than equal pay. Equal opportunity means a mindset change. That’s why I feel it’s so difficult: if you stay at a superficial level you can maybe try to address it in bits, but it’s really about how your family thinks about women, how your teachers think about women, how your business associates think about women. I remember I was invited to give a talk at a company headquartered in Japan, and I got a call from someone who said, so, you know, culturally, maybe you want to take a male colleague along. I respect the culture, absolutely; I come from a different culture and we have our own things. But it made me pause. I was like, oh, that’s interesting. I’m not down on anybody or anything; I was very well treated, so this was not a problem. But it did come up in a prep call, and I was like, huh, that’s interesting. That is interesting.

Kevin Rosenquist: That’s off-putting when you’re used to how things are here. That’s interesting.

Arti Raman: It’s interesting, but I must say it never came to pass. I didn’t take a male colleague, and it was never an issue. But the fact that we were even thinking about it means we still have some work to do.

Kevin Rosenquist: I try not to generalize, but with Generation Z, your daughter’s age and such, I feel like there’s a difference in mindset. Whether it’s gender, race, sexual orientation, or whatever, it feels like they view it differently. They’re not just more accepting; it’s more of a we’re-all-the-same attitude. It doesn’t matter: you love who you love, pay everybody the same, and all that. So I think there are positive vibes coming from that. It feels like they’re ready for change.

Arti Raman: I have to agree with you. I have so much respect for that generation, for my daughter and her generation. I see them as their own people with their own value system. They kicked out everything that didn’t work, at least the folks around me that I see. It goes beyond race, sexuality, and gender; it’s more than that. They’re retro when they want to be, modern when they want to be. Their fashion is their own. It is incredible. I’m so proud of that generation. I can’t say I’m one of them, though.

Kevin Rosenquist: Well, we’re old compared to them.

Arti Raman: No, I feel like it’s totally in the right direction. It is unbelievably awesome. I like that.

Kevin Rosenquist: I like it, too. I’m very bullish on them to save the world.

Arti Raman: Exactly. So when my daughter, who was very good at math and so on, said, no, I want to do something cross-functional, like law or economics, she was like, if I’m going to make a change in the world, I can’t just be a nerd; I have to do the other stuff. Coming from a 16-year-old at the time, I thought that was pretty interesting. She wants to make a difference; she wants to put herself out there. I don’t think I had that when I was 16. I wanted to come to America. That’s what I wanted: opportunities for me.

Kevin Rosenquist: Well, it worked. Now your daughter has awesome opportunities. So it worked out pretty well.

Arti Raman: I hope so.

Kevin Rosenquist: Well, it’s been great talking with you. Thank you so much for being here. I appreciate your insight and good luck to you with everything, and with Portal26. It sounds like an awesome thing that is very much needed right now.

Arti Raman: Thank you so much for the opportunity, for the topics, and also for the general conversation. It’s such a pleasure to talk to you.

Kevin Rosenquist: Thank you. Thanks, Arti.