
After Lovable and Vercel: The Checklist for Evaluating Any PaaS's Data Practices

The April 2026 breaches at Lovable and Vercel changed the conversation around platform security. For years, engineering teams accepted a tradeoff: hand over infrastructure management to a platform in exchange for developer experience. The implicit assumption was that these platforms would handle security at least as well as you could yourself. That assumption no longer holds.

Both incidents exposed customer data due to weaknesses in how the platforms managed multi-tenant infrastructure. The details differed, but the pattern was the same: customers had limited visibility into what happened, restricted ability to investigate independently, and no control over the remediation timeline. Teams that had chosen these platforms specifically to avoid infrastructure complexity found themselves in a worse position than if they had managed their own cloud accounts.

If you are evaluating PaaS options today, whether migrating from Heroku, reconsidering your current platform, or simply more skeptical after reading the breach reports, you need a framework for asking the right questions. This checklist covers the four categories that matter most: data residency, access controls, audit trails, and breach response. These are not theoretical concerns. They are the exact areas where the April 2026 incidents created real damage for affected customers.

Why These Questions Matter More Now

Before April 2026, platform security discussions often focused on uptime SLAs and compliance certifications. SOC 2 badges and ISO 27001 compliance were treated as sufficient evidence that a platform took security seriously. The Lovable and Vercel breaches demonstrated that certifications do not prevent architectural vulnerabilities.

The core issue in both cases was not a failure of operational security practices. It was a structural problem with how customer data was isolated (or not isolated) at the infrastructure level. When customers share compute resources, network paths, and storage systems, a vulnerability in one area can cascade. Compliance frameworks audit processes and policies. They do not audit whether your data can be accessed through a lateral movement attack that exploits shared infrastructure.

This matters for three reasons:

First, regulatory exposure has increased. GDPR, CCPA, and sector-specific regulations hold data controllers responsible regardless of where data is processed. If your platform vendor experiences a breach, you still have notification obligations, potential fines, and liability to your own customers. The April 2026 incidents triggered regulatory inquiries in multiple jurisdictions, and affected customers had to navigate those inquiries with limited information about what actually happened to their data.

Second, customer trust is harder to rebuild. When you tell customers that their data is hosted on a third-party platform, they assume you vetted that platform. If that platform experiences a breach, your customers do not distinguish between "our vendor had a breach" and "you had a breach." The reputational damage lands on you either way.

Third, investigation capability matters. In both April 2026 incidents, affected customers had to wait for the platform vendor to complete their investigation before understanding the scope of exposure. Customers with compliance obligations faced an impossible situation: regulators wanted answers, but the platform controlled all the logs. If you cannot investigate independently, you are at the mercy of the vendor's timeline and transparency.

Category 1: Data Residency Questions

Data residency goes beyond simply choosing a region from a dropdown. The questions that matter concern where your data physically lives, who else shares that infrastructure, and whether you can verify the answers independently.

Can you specify the exact region and availability zones? Some platforms offer region selection but deploy resources across availability zones you do not control. If you have data sovereignty requirements, you need to know not just the region but the specific zones where your data will reside. Ask whether the platform provisions resources in a single zone, multiple zones, or leaves this unspecified.

Can you verify that data stays in your cloud account? This is the fundamental question. If the platform provisions resources in their account and you access them through an API, your data lives in their infrastructure. If the platform provisions resources in your account, you can verify through your cloud provider's console exactly where data resides. The difference is not just theoretical. It determines who has root access, who controls encryption keys, and who sees the raw infrastructure.

Is there customer data commingling? Multi-tenant platforms share infrastructure across customers to achieve economies of scale. This sharing can occur at multiple layers: compute nodes, database clusters, network segments, storage backends. Ask specifically about each layer. "We use separate containers" is not the same as "we provision separate VPCs." The April 2026 breaches both involved scenarios where tenant isolation at the application layer did not prevent access through shared infrastructure layers.

Who controls the encryption keys? Data at rest encryption is standard, but key management varies widely. Some platforms use shared keys across customers. Some use per-customer keys but manage them on your behalf. Some provision keys in your cloud account's KMS with policies that restrict even the platform's access. The latter option means that even if the platform is compromised, your data remains encrypted with keys the attacker cannot access.
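When keys live in your own account's KMS, you can answer this question yourself rather than taking the vendor's word for it. A sketch using the AWS CLI; the key ID is a placeholder you would replace with one from `aws kms list-keys`:

```shell
# Read the key policy for a customer-managed KMS key in your account
# to see which principals (including any platform role) can use it.
aws kms get-key-policy \
  --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --policy-name default \
  --output text

# List grants on the key, which can authorize access
# outside the key policy itself.
aws kms list-grants \
  --key-id 1234abcd-12ab-34cd-56ef-1234567890ab
```

If a platform cannot point you at a key policy you can read this way, the keys are not under your control.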

Can you bring your own VPC? The ability to deploy into an existing VPC that you control is a strong indicator of data residency architecture. If a platform supports this, it means they have designed their provisioning to work within your network boundaries rather than requiring you to trust their network.

Category 2: Access Control Questions

Access control questions determine who can touch your infrastructure and data, both at the platform vendor and within your own organization.

What IAM boundaries exist between the platform and your resources? When a platform provisions resources, it needs some level of access to manage them. The question is how that access is scoped. Does the platform use a role with administrative access to your entire AWS account, or is it restricted to specific resources? Can you audit the policy attached to that role? Can you further restrict it after installation?
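If the platform's role is created in your AWS account, these questions stop being hypothetical: you can inspect the role directly. A sketch with the AWS CLI; the role name here is a placeholder for whatever the platform creates during installation:

```shell
# Trust policy: who is allowed to assume the platform's role?
aws iam get-role \
  --role-name PlatformProvisioningRole \
  --query 'Role.AssumeRolePolicyDocument'

# Managed policies attached to the role.
aws iam list-attached-role-policies \
  --role-name PlatformProvisioningRole

# Inline policies, whose statements you can then fetch and read.
aws iam list-role-policies \
  --role-name PlatformProvisioningRole
```

A role scoped to specific resources and actions will show it here; a role with `AdministratorAccess` attached will show that too.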

Who at the vendor can access your production infrastructure? Ask for specifics. How many employees have access? What roles have it? Is access logged? Is there a process for reviewing and revoking access? Some platforms grant broad infrastructure access to support engineers. Others restrict production access to a small operations team with audit trails. The answer tells you about the blast radius if a platform employee's credentials are compromised.

Can you revoke platform access without destroying your infrastructure? This question tests whether the platform has designed for customer control. If revoking access breaks your deployment, you are locked in. If you can revoke access and continue operating (even if you lose platform features), you have genuine control over your infrastructure.

What network access does the platform have? Some platforms require inbound network access to your VPC. Others only require outbound access from an agent running in your cluster. The direction and scope of network access matter for both security posture and incident investigation. If the platform can initiate connections into your network, that is a potential attack vector.

How are secrets managed? Environment variables, API keys, and database credentials need to live somewhere. Ask whether secrets are stored in the platform's infrastructure or in your cloud account's secrets manager. If they are stored by the platform, ask about encryption, access controls, and whether platform employees can view them.
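When secrets are stored in your own account's Secrets Manager, you can enumerate them and check who is granted access. A sketch with the AWS CLI; the secret ID is a hypothetical placeholder:

```shell
# Confirm which secrets exist in your account.
aws secretsmanager list-secrets \
  --query 'SecretList[].Name'

# Inspect the resource policy (if any) attached to a secret
# to see which principals can read it.
aws secretsmanager get-resource-policy \
  --secret-id my-app/database-url
```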

Category 3: Audit Trail Questions

Audit trails determine your ability to understand what happened during normal operations and, critically, during security incidents.

Do you own CloudTrail or the equivalent? If infrastructure is provisioned in your cloud account, the cloud provider's audit log (CloudTrail for AWS, Cloud Audit Logs for GCP, Activity Log for Azure) captures every API call. You own this log, you control retention, and you can query it without asking anyone's permission. If infrastructure is in the platform's account, you are dependent on the platform to provide audit data.
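Owning the trail means you can query it on demand. A sketch with the AWS CLI, searching your own CloudTrail for a specific API call during an incident window:

```shell
# Who called TerminateInstances since the start of the incident window?
# No vendor ticket required; this runs against your own audit log.
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=TerminateInstances \
  --start-time 2026-04-01T00:00:00Z \
  --max-results 20
```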

Can you access VPC flow logs? Flow logs capture network traffic metadata within your VPC. During an incident, they help you understand which systems communicated with which, when, and how much data moved. If you do not control the VPC, you cannot enable or access flow logs. This was a specific gap in the April 2026 investigations: affected customers could not determine whether data had been exfiltrated because they had no network visibility.
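In a VPC you control, enabling flow logs is a single command. A sketch with the AWS CLI; the VPC ID, log group name, and role ARN are placeholders for resources in your account:

```shell
# Capture network traffic metadata for the whole VPC
# into a CloudWatch Logs group you own.
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::111111111111:role/flow-logs-role
```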

What is the log retention policy? Compliance frameworks often require specific retention periods. More importantly, investigations sometimes begin weeks or months after an incident. If logs are only retained for 7 days, you cannot investigate something discovered on day 8. Ask about retention for application logs, infrastructure logs, and audit logs. Ask whether you control the retention policy or whether it is fixed by the platform.
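When the log groups live in your account, retention is a setting you control rather than a policy you negotiate. A sketch with the AWS CLI; the log group name is a placeholder:

```shell
# Set 90-day retention on a log group you own.
aws logs put-retention-policy \
  --log-group-name /app/production \
  --retention-in-days 90

# Verify the retention currently in effect.
aws logs describe-log-groups \
  --log-group-name-prefix /app/production \
  --query 'logGroups[].{name:logGroupName,retentionDays:retentionInDays}'
```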

Can you ship logs to your own systems? The ability to forward logs to your SIEM, log aggregator, or compliance archive means you are not dependent on the platform for log access. It also means you can correlate platform logs with logs from other systems in your environment. If the platform only offers log viewing through their console, you are accepting a single point of failure for your visibility.
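With CloudWatch Logs in your own account, forwarding is a subscription filter away. A sketch with the AWS CLI; the log group, stream, and role ARNs are placeholders for a Kinesis-based pipeline you operate (Firehose or Lambda destinations work similarly):

```shell
# Stream every event in a log group to your own ingestion pipeline.
aws logs put-subscription-filter \
  --log-group-name /app/production \
  --filter-name ship-to-siem \
  --filter-pattern "" \
  --destination-arn arn:aws:kinesis:us-east-1:111111111111:stream/siem-ingest \
  --role-arn arn:aws:iam::111111111111:role/cwlogs-to-kinesis
```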

Are platform actions logged separately from your actions? When investigating an incident, you need to distinguish between changes made by your team and changes made by the platform (either automatically or through support). If these are commingled, you cannot determine the root cause of unexpected changes.

Category 4: Breach Response Questions

These questions determine what happens when something goes wrong. They are uncomfortable to ask during a sales process, but they matter most when you need them.

What is the breach notification timeline? GDPR requires notification within 72 hours. Many US state laws have similar requirements. Ask what the platform commits to. "As soon as reasonably practicable" is not an answer. Get a specific hour count. Then ask whether that timeline includes only confirmed breaches or also suspected breaches under investigation.

Can you investigate independently? If a breach occurs, can you access the logs and infrastructure to run your own investigation? Or must you wait for the platform to provide a report? Independent investigation capability is not just about control. It is about speed. Your incident response team can begin work immediately rather than waiting days or weeks for the platform's security team to complete their process.

What liability does the vendor accept? Read the terms of service. Most platforms limit liability to the fees you have paid. Some explicitly disclaim liability for data breaches. Understand what recourse you have and what losses you are absorbing yourself. This is not about expecting large payouts. It is about understanding the risk allocation in the relationship.

What is the communication plan during an incident? During the April 2026 breaches, affected customers complained about inconsistent communication. Some learned about the breach from press coverage before receiving official notification. Ask about the communication channels, frequency of updates, and who you will be communicating with. A dedicated security contact is better than a general support queue.

Is there a published incident response plan? Companies that take security seriously document their incident response process and make it available to customers. If a platform cannot share even a summary of their IR plan, that tells you something about their maturity.

How Convox Racks Answer Each Category

Convox Racks deploy entirely within your own cloud account. This architectural decision answers most of the questions above before they are asked. Rather than being a trust assertion, this is a technical fact you can verify by logging into your AWS, GCP, or Azure console and seeing the resources.

Data Residency: When you install a Convox Rack, you specify the region. You can also specify existing infrastructure using parameters like vpc_id, private_subnets_ids, and public_subnets_ids. Your data never leaves your account. There is no customer data commingling because each Rack is a separate Kubernetes cluster in a separate VPC in a separate cloud account. You can verify all of this through your cloud provider's console at any time. See the vpc_id and private_subnets_ids parameter documentation for configuration details.

Installing the Rack itself is straightforward. Create a free Convox account, add a runtime integration for AWS, GCP, Azure, or DigitalOcean from the Integrations page in the Console, then open the Racks page and click Install. From there you pick the region, choose a predefined Rack Template or customize the Rack Parameters yourself (including the VPC parameters above if you are bringing existing infrastructure), and click Install again to kick off provisioning. Rack creation typically takes 5 to 20 minutes and you can watch each resource appear in your own cloud provider console in parallel.

If you prefer the command line, the Convox CLI supports the same flow. A bring-your-own-VPC install looks like this:

convox rack install aws production \
  vpc_id=vpc-0123456789abcdef0 \
  private_subnets_ids=subnet-abc123,subnet-def456 \
  public_subnets_ids=subnet-jkl012,subnet-mno345 \
  cidr=10.2.0.0/16

Access Controls: The Convox control plane communicates with your Rack through an outbound connection initiated from within your cluster. Convox does not need inbound network access to your VPC. The IAM role created during installation can be audited, and its permissions are documented. You can further restrict permissions after installation. If you revoke access, your Kubernetes cluster continues running. You lose management features but retain your infrastructure. Secrets are stored in your cluster or your cloud provider's secrets manager, depending on your configuration. See the environment variables documentation for how secrets are handled.

Audit Trails: Because everything runs in your account, you own CloudTrail automatically. Every API call to AWS (or the equivalent for other providers) is logged in your audit trail. VPC flow logs are available if you enable them. Application logs go to CloudWatch in your account with configurable retention using the access_log_retention_in_days parameter. You can forward logs to any destination using the syslog parameter or by deploying your own log forwarder. The logging documentation covers all options.

Rack Parameters can be updated from the Rack's settings page in the Console at any time, or from the command line. From the CLI, enabling 90 day log retention and syslog forwarding looks like:

convox rack params set \
  access_log_retention_in_days=90 \
  syslog=tcp+tls://logs.example.com:6514

Breach Response: Because your infrastructure runs in your account, you can investigate immediately. If Convox experienced a breach, your data would not be exposed because your data is not in Convox's infrastructure. The blast radius of a Convox control plane compromise is limited to management operations. Your application data, logs, and network traffic remain in your cloud account. You can (and should) have your own incident response plan that does not depend on Convox's timeline.

This architecture is sometimes called Bring Your Own Cloud (BYOC). The key distinction from managed platforms is that BYOC treats the customer's cloud account as the source of truth for infrastructure. The platform provides orchestration and developer experience without requiring data to pass through or reside in the platform vendor's infrastructure.

Applying the Checklist to Any Platform

The questions in this checklist work for evaluating any platform, not just Convox. When you talk to a vendor, bring these categories explicitly. Ask for written documentation of their architecture. Request to see the IAM policies or service account permissions they require. Ask for a sample incident response runbook.

Watch for evasive answers. "We take security very seriously" is not an answer. "We are SOC 2 certified" is not an answer to questions about data residency. "We use industry-standard encryption" does not tell you who controls the keys.

The platforms that answer these questions well are the ones that have thought about them. They have documentation, architecture diagrams, and clear policies. They are willing to get specific because they have designed their systems with these concerns in mind.

The platforms that deflect or generalize are telling you something about their priorities. That is useful information too.

Get Started

If you are evaluating infrastructure options after the April 2026 incidents, Convox Racks provide a straightforward answer to the hardest questions on this checklist. Your data stays in your cloud account. You own the audit logs. You can investigate independently.

The Getting Started Guide walks through installation and your first deployment. You can create a free account and install a Rack in your own AWS, GCP, or Azure account to verify the architecture yourself.

For teams with specific compliance requirements or questions about how Convox fits your security model, reach out to our team to discuss your architecture.

Let your team focus on what matters.