This page describes our commitment to HIPAA compliance. Recognizing its critical importance, our services adhere to high standards of regulatory conformity, and we have implemented stringent measures to ensure that your sensitive information is handled with care and security.
As a matter of principle, we store as little information in our database as possible. When storage is necessary, we transform the data into high-dimensional vectors, making it difficult for third parties to interpret.
We want to be transparent about what our compliance with HIPAA specifically entails. While we are committed to full compliance, we recognize that there may be areas for improvement or correction. If you see something that you believe is not in alignment with HIPAA regulations, please don't hesitate to bring it to our attention.
Data Usage and Privacy
We want to assure you that our service, "Q, ChatGPT for Slack", does not store the content of user prompts or any Protected Health Information (PHI) that might be included in them. When a user interacts with our chatbot system, any PHI included in the prompt is not retained in our database or logs, ensuring there is no disclosure of such information.
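To illustrate the point above, here is a minimal sketch of metadata-only request handling. The function name, fields, and hashing scheme are illustrative assumptions, not our actual implementation; the sketch simply shows a log record that never contains the prompt text or any PHI within it.

```python
import time
import hashlib

def handle_prompt(user_id: str, prompt: str) -> dict:
    """Illustrative handler: the prompt text is forwarded for processing
    but never written to the log record -- only non-identifying metadata."""
    log_record = {
        "timestamp": int(time.time()),
        # The user ID is hashed so the record cannot identify the user directly.
        "user_hash": hashlib.sha256(user_id.encode()).hexdigest()[:16],
        # Only the length is recorded, never the prompt content (or any PHI in it).
        "prompt_chars": len(prompt),
    }
    return log_record

record = handle_prompt("U12345", "Patient John Doe has a fever.")
assert "John" not in str(record)  # no PHI appears anywhere in the stored record
```

The design choice here is that anything useful for operations (volume, timing, rough usage per user) survives, while nothing in the record can reproduce the original prompt.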
As of March 1, 2023, our AI provider, OpenAI, has stated in its data usage policy that customer-submitted data sent through their API will not be used for the purpose of improving their models unless the customer opts to share their data for this purpose. Data sent through the API will be retained for a maximum of 30 days for the purpose of abuse and misuse monitoring, and will then be deleted unless legally required otherwise. You can find OpenAI's full data usage policies here.
We offer an additional feature where users can pre-scan PDF content that is then transformed into vectors within a thousand-dimensional space and stored in a vector database provided by Pinecone. Please note that the original text in these PDFs is not retained in a readable format, which greatly limits the potential for third parties to access or interpret this information.
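The flow above can be sketched as follows. This is an illustration only: `toy_embed` is a deterministic hash-based stand-in for a real embedding model, and the record layout merely mirrors the shape of a typical vector-database upsert rather than actual Pinecone client code.

```python
import hashlib
import math

DIMENSIONS = 1000  # the page describes a thousand-dimensional space

def toy_embed(text: str, dims: int = DIMENSIONS) -> list[float]:
    """Stand-in for a real embedding model: deterministically maps text to a
    dense unit vector. The mapping is one-way; the original text cannot be
    read back out of the numbers."""
    vec = [0.0] * dims
    for token in text.lower().split():
        digest = hashlib.sha256(token.encode()).digest()
        index = int.from_bytes(digest[:4], "big") % dims
        vec[index] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# What gets stored: an ID, the vector, and optional non-sensitive metadata --
# the readable PDF text itself is not part of the record.
chunk = "Example sentence extracted from an uploaded PDF."
record = {"id": "pdf-1-chunk-0", "values": toy_embed(chunk), "metadata": {"page": 1}}
```

A real deployment would obtain the vector from an embedding model and send the record to the vector database; the point the sketch makes is that what lands in storage is numbers plus minimal metadata, not recoverable prose.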
Pinecone maintains robust data safeguards such as storing data in isolated containers, encrypting data at rest and in transit, and never using customer data for any reason other than servicing API calls. Furthermore, Pinecone strictly monitors operational metrics to support the system's health and performance while implementing strict role-based access control for service engineers. The specifics of Pinecone security measures can be found in detail here.
Our service operates entirely on Slack, which provides its own set of security measures such as two-factor authentication, managing apps with care, limiting access to the workspace, setting session duration, and more, as outlined in their Security tips to protect your workspace. These collective measures further enhance the security of our service.
Our AI model provider, OpenAI, is SOC 2 Type 2 compliant. This means that OpenAI has been independently audited and certified, which assures us of their rigorous data management and security practices. When data is sent through the API, it is always encrypted during transit, ensuring a robust security measure against potential breaches.
Additionally, our service transforms any AI reference data obtained from PDF content into secure vector form and stores it using the security measures provided by Pinecone, which, like OpenAI, is SOC 2 Type 2 certified. Pinecone's certification, audited by an external Big4 CPA firm (EY), covers Information Security, Availability, and Confidentiality. Pinecone runs on fully managed and secure AWS infrastructure as a multi-tenant Kubernetes cluster, with customer data stored in isolated containers, encrypted both at rest and in transit. This double layer of SOC 2 Type 2 compliant security provides robust assurance.
Administrative, Physical, and Technical Safeguards
Safeguards are an essential component of HIPAA compliance. They include a range of security measures that protect the confidentiality, integrity, and availability of electronic PHI (e-PHI). We follow strict protocols to ensure the data we handle is appropriately safeguarded. In line with this, our service does not store user prompts or any PHI included within them, greatly minimizing the risk associated with data storage and disclosure.
Moreover, because our service is deployed entirely on Slack, we adhere to Slack's security practices. We do not maintain our own login IDs or passwords; instead, we rely on Slack's rigorous security protocols, which include two-factor authentication and data encryption. By default, all members can install apps, but workspace owners can restrict how members install and use them. Slack also recommends inviting only people you know, verifying email domains, deactivating the accounts of members who no longer need access, and using guest accounts limited to specific channels. These practices help ensure that only the right people have access to information in your workspace. For more details about these measures, please refer to Slack's Security tips to protect your workspace.
Our use of the OpenAI API and Pinecone vector database also provides robust safeguards. Both services are SOC 2 Type 2 compliant and maintain strong data encryption practices. Data is always encrypted in transit when sent through the OpenAI API, and Pinecone stores data in isolated, encrypted containers.
Risk Analysis
Risk analysis is the process of identifying and evaluating potential areas of risk in our systems that could negatively impact our operations and objectives. This can cover various risks such as security threats, data breaches, and system failures.
In our case, as we do not store user prompts or any PHI included in them, the risk associated with data storage and disclosure is greatly minimized. Nevertheless, we continuously monitor the operational status and performance of our chatbot system to identify any anomalies that might indicate potential issues. This allows us to act quickly to address any identified risks and minimize potential impact.
Besides our continuous monitoring, our vector database, Pinecone, also carries out its own risk assessment process on an annual basis to identify, assess, and manage risks that could affect the operational integrity of its systems. Pinecone minimizes identified risks through ongoing monitoring and risk assessment procedures built into its regular management and supervisory activities. This comprehensive risk management on both our side and Pinecone's adds to the overall security of the system.
OpenAI also has a limited number of authorized employees and specialized third-party contractors who can access the data solely to investigate and verify suspected abuse. These parties are subject to confidentiality and security obligations, reinforcing the protection against potential risks.
Violation Notification Procedures
A violation notification procedure refers to the steps taken in the event of a breach of security leading to unauthorized access, use, disclosure, disruption, modification, or destruction of information.
If a security incident were to occur on Slack, they follow a comprehensive incident response program that includes predefined processes and communication plans. They promptly identify, respond to, and mitigate any potential impacts. In the event of a security breach, Slack would notify us, and we would take the necessary actions to respond appropriately.
In the unlikely event of a security breach on Pinecone's side, they have an incident management policy that covers identification, repair, investigation, prevention, and follow-up actions. Pinecone's incident management team acts and makes decisions as necessary to respond appropriately to security incidents in accordance with applicable laws and regulations.
While we do not store PHI and thus the chance of a data breach is greatly reduced, we, in collaboration with Pinecone, are committed to dealing responsibly and effectively with any potential security incidents.
Employee Training
We train our team on essential data handling best practices, including the importance of privacy, the protection of sensitive information, and understanding how to recognize and report potential security concerns. While this training does not focus specifically on HIPAA requirements, it covers general principles applicable to the handling of sensitive data in a responsible and secure manner. This includes training on managing user accounts, passwords, media use, email and communication activities, and other relevant procedures.
In addition to this general training, our team also undergoes specific HIPAA training and testing, and we can provide certificates of completion upon request.
Business Associate Agreement (BAA)
Currently, we do not have a BAA in place with Slack, our application platform provider, as Slack does not sign BAAs with any of its customers.
We do not have a BAA in place with OpenAI, our AI model provider. However, OpenAI has stated on its policy page that it can sign Business Associate Agreements that align with HIPAA requirements in support of customers' compliance, and we are willing to sign a BAA with OpenAI. That said, we cannot yet provide a definite timeline or confirm feasibility, due to OpenAI's private criteria and capacity limitations.
We do not have a BAA in place with our customers. However, we are willing to sign a BAA with our customers upon request. If you would like us to sign a BAA with you, please contact us.
In the case of our service, there are no procedures in place for patients to access or amend their PHI, as such information is not stored or processed beyond the immediate interaction. Consequently, the question of patient rights to access and amend their PHI is not applicable in this case.