Khash Kiani

Khash Kiani is the Head of Security, Trust, and IT at ASAPP, where he ensures the security and integrity of the company's AI products and global infrastructure, emphasizing trust and safety for enterprise customers in regulated industries. Previously, Khash served as CISO at Berkshire Hathaway's Business Wire, overseeing global security and audit functions for B2B SaaS offerings that supported nearly 50% of Fortune 500 companies. He also held key roles as Global Head of Cybersecurity at Juul Labs and Executive Director and Head of Product Security at Micro Focus and HPE Software.
Strengthening security in CX platforms through effective penetration testing
At ASAPP, maintaining robust security measures is more than just a priority; it's part of our operational ethos and is crucial for applications in the CX space. Security in CX platforms is crucial to safeguarding sensitive customer information and maintaining trust, which are foundational for positive customer interactions and satisfaction. As technology evolves, incorporating open-source solutions and a multi-player environment - with cloud offerings from one vendor, AI models from another, and orchestration from yet another - product security must adapt to address new vulnerabilities across all aspects of connectivity.
In addition to standard vulnerability assessments of our software and infrastructure, we perform regular penetration testing on our Generative AI product and messaging platform. These tests simulate adversarial attacks to identify vulnerabilities that may arise from design or implementation flaws.
All ASAPP products undergo these rigorous penetration tests to ensure product integrity and maintain the highest security standards.
This rigorous approach not only ensures that we stay ahead of modern cyber threats, but also maintains high standards of security and resilience throughout our systems, safeguarding both our clients and their customers as evidenced by our highly respected security certifications.
Collaborating with Industry Experts
To ensure thorough and effective penetration testing, we collaborate with leading cybersecurity firms such as Mandiant, Bishop Fox, and Atredis Partners. Each firm offers specialized expertise that contributes significantly to our testing processes and offers breadth of coverage in our pentests.
- Mandiant provides comprehensive insights into real-world attacks and exploitation methods
- Bishop Fox is known for its expertise in offensive security and innovative testing techniques
- Atredis Partners offers depth in application and AI security
Through these partnerships, we ensure a comprehensive examination of our infrastructure and applications for security & safety.
Objectives of Our Penetration Testing
The fundamental objective of our penetration testing is to proactively identify and remedy vulnerabilities before they can be exploited by malicious entities. By simulating realistic attack scenarios, we aim to uncover and address any potential weaknesses in our security posture, and fortify our infrastructure, platform, and applications against a wide spectrum of cyber threats, including novel AI risks. This proactive stance empowers us to safeguard our systems and customer data effectively.
Methodologies Employed in Penetration Testing
Our approach to penetration testing is thoughtfully designed to address a variety of security needs. We utilize a mix of standard methodologies tailored to different scenarios.
Black Box Testing replicates the experience of an external attacker with no prior knowledge of our systems, thus providing an outsider’s perspective. By employing techniques such as prompt injection, SQL injection, and vulnerability scanning, testers identify weaknesses that could be exploited by unauthorized entities.
In contrast, our White Box Testing offers an insider’s view. Testers have complete access to system architecture, code, and network configurations. This deep dive ensures our internal security measures are robust and comprehensive.
Grey Box Testing, our most common methodology, acts as a middle ground, combining external and internal insights. This method uses advanced vulnerability scanners alongside focused manual testing to scrutinize specific system areas, efficiently pinpointing vulnerabilities in our applications and AI systems. This promotes secure coding practices and speeds up the remediation process.
Our testing efforts are further complemented by a blend of manual and automated methodologies. Techniques like network and app scanning, exploitation attempts, and security configuration assessments are integral to our approach. These methods offer a nuanced understanding of potential vulnerabilities and their real-world implications.
Additionally, we maintain regular updates and collaborative discussions between our security team and partnered firms, ensuring that we align with the latest threat intelligence and vulnerability data. This adaptive and continuous approach allows us to stay ahead of emerging threats and systematically bolster our overall security posture against a broad range of threats.
Conclusion
Penetration testing is a critical element of our comprehensive security strategy at ASAPP. Though it isn't anything new in the security space, we believe it remains incredibly relevant and important. By engaging with leading cybersecurity experts, leveraging our in-house expertise, and applying advanced techniques, we ensure the resilience and security of our platform and products against evolving traditional and AI-specific cyber threats. Our commitment to robust security practices not only safeguards our clients' and their customers’ data but also enables us to deliver AI solutions with confidence. Through these efforts, we reinforce trust with our clients and auditors and remain committed to security excellence.
AI security and AI safety: Navigating the landscape for trustworthy generative AI
AI security & AI safety
In the rapidly evolving landscape of generative AI, the terms "security" and "safety" often crop up. While they might sound synonymous, they represent two distinct aspects of AI that demand attention for a comprehensive and trustworthy AI system. Let's dive into these concepts and explore how they shape the development and deployment of generative AI, using real-world examples from contact centers to make sense of these crucial elements. To start, here is a quick overview video on AI security and AI safety:
AI security: The shield against malicious threats
When we think about AI security, it's crucial to differentiate between novel AI-specific risks and security risks that are common across all types of applications, not just AI.
The reality is that over 90% of AI security efforts are dedicated to addressing critical basics and foundational security controls. These include data protection, encryption, data retention, PII redaction, authorization, and secure APIs. It’s important to understand that while novel AI-specific threats like prompt injection - where a malicious actor manipulates input to retrieve unauthorized data or inject system commands - do exist, they represent a smaller portion of the overall security landscape.
Let's consider a contact center chatbot powered by AI. A user might attempt to embed harmful scripts within their query, aiming to manipulate the AI into disclosing sensitive customer information, like social security numbers or credit card details. While this novel threat is significant, the primary defense lies in robust foundational security measures. These include input validation, strong data protection, employing encryption for sensitive information, and implementing strict authorization and data access controls.
Secure API access is another essential cornerstone. Ensuring that all API endpoints are authenticated and authorized prevents unauthorized access and data breaches. In addition to these basics, implementing multiple layers of defense helps mitigate novel threats. Input safety mechanisms can detect and block exploit attempts, preventing abuse like prompt leaks and code injections. Advanced Web Application Firewalls (WAFs) also play a vital role in defending against injection attacks, similar to defending against common application threats like SQL injection.
Continuous monitoring and logging of all interactions with the AI system is very important in detecting any suspicious activities. For example, an alert system can flag unusual API access patterns or data requests by an AI system, enabling rapid response to potential threats. Furthermore, a solid incident response plan is indispensable. It allows the security team to swiftly contain and mitigate the impact of any security events or breaches.
So while novel AI-specific risks do pose a threat, the lion’s share of AI security focuses on foundational security measures that are universal across all applications. By getting the basics right we build a robust shield around our AI systems, ensuring they remain resilient against both traditional and emerging threats.
AI safety: The guardrails for ethical and reliable AI
While AI security acts as a shield, AI safety functions like guardrails, ensuring the AI operates ethically and reliably. This involves measures to prevent unintended harm, ensure fairness, and adhere to ethical guidelines.
Imagine a scenario where an AI Agent in a contact center is tasked with prioritizing customer support tickets. Without proper safety measures, the AI could inadvertently favor tickets from specific types of customers, perhaps due to biased training data that inadvertently emphasizes certain demographics or issues. This could result in longer wait times and dissatisfaction for overlooked customers. To combat this, organizations should implement bias mitigation techniques, such as diverse training datasets. Regular audits and red teaming are essential to identify and rectify any inherent biases, promoting fair and just AI outputs. Establishing and adhering to ethical guidelines further ensures that the AI does not produce unfair or misleading prioritization decisions.
An important aspect of AI safety is addressing AI hallucinations, where the AI generates outputs that aren't grounded in reality or intended context. This can result in the AI fabricating information or providing incorrect responses. For instance, a customer service AI Agent might confidently present incorrect policy details if it isn't properly trained and grounded. Output safety layers and content filters play a crucial role here, monitoring outputs to catch and block any harmful or inappropriate content.
Implementing a human-in-the-loop process adds another layer of protection. Human operators can be called on to intervene when necessary, ensuring critical decisions are accurate and ethical. For example, contact center human agents can be the final step of authorization before performing a critical task, or providing additional insight when the AI system produces incorrect output or does not have enough information to support a user.
The intersection of security and safety
Though AI security and AI safety address different aspects of AI operation, they often overlap. A breach in AI security can lead to safety concerns if malicious actors manage to manipulate the AI's outputs. Conversely, inadequate safety measures can expose the system to security threats by allowing the AI to make incorrect or dangerous decisions.
Consider a scenario where a breach allows unauthorized access to the contact center’s AI system. The attackers could manipulate the AI to route calls improperly, causing delays and customer frustration. Conversely, if the AI's safety protocols are weak, it might inaccurately redirect emergency calls to non-critical queues, posing serious risks. Therefore, a balanced approach that addresses both security and safety is essential for developing a trustworthy generative AI solution.
Balanced approach for trustworthy AI
Understanding the distinction between AI security and AI safety is pivotal for building robust AI systems. Security measures protect the AI system from external threats, ensuring the integrity, confidentiality, and availability of data. Meanwhile, safety measures ensure that the AI operates ethically, producing accurate outputs.
By focusing on both security and safety, organizations can mitigate risks, enhance user trust, and responsibly unlock the full potential of generative AI. This dual focus ensures not only the operational integrity of AI systems but also their ethical and fair use, paving the road for a future where AI technologies are secure, reliable, and trustworthy.
Looking for an AI vendor you can trust? Not sure where to get started?
Watch our on-demand webinar to learn how