A client engaged us to investigate a tip that their website might be exposed. The site looked fine. Customers booked, paid, received invoices, and nothing was visibly broken. We found the exposure within an hour. The supplier who'd built and maintained the site for four years took 30 hours and still hadn't produced any updates, didn't have a working backup, and asked us to send them the publicly leaked copy of their own client's source code so they could keep working.
This is a supplier-risk story.
The exposure
The agency had pushed code to production via a git push directly into the web root and left the resulting .git directory at the root of the live site publicly accessible. Anyone could clone the entire codebase by running git clone https://<the-domain>/.git. Source, configuration, full commit history. Nineteen months sitting open.
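This class of exposure can be checked from the outside by requesting /.git/HEAD and reading the status code. A minimal sketch in shell — the domain is a placeholder and the live curl call is left commented out; the status-code interpretation is the part that matters:

```shell
#!/bin/sh
# Interpret the HTTP status returned for https://<domain>/.git/HEAD.
classify_git_probe() {
  case "$1" in
    200) echo "EXPOSED" ;;   # .git is being served; the repo is likely cloneable
    404) echo "OK" ;;        # nothing served at that path
    403) echo "BLOCKED" ;;   # denied at the web tier; may still exist on disk
    *)   echo "CHECK" ;;     # redirects, soft-404 pages etc. need a manual look
  esac
}

# Live probe (requires network; substitute the real domain):
# status=$(curl -s -o /dev/null -w '%{http_code}' "https://example.com/.git/HEAD")
# classify_git_probe "$status"
```

A 200 here usually means the whole repository can be reconstructed, even with directory listings off, because git's internal file layout is predictable.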
Alongside the application logic, the agency had committed live credentials. Production payment processor publishable and secret keys. API keys for the mail platform. Accounting integration client ID and secret. AWS access key and secret. The database password in plain text. The application's encryption key.
The site was taking customer credit cards at checkout, encrypting them with that leaked encryption key, and storing them in the database. Card number, cardholder name, expiry, and CVC. Storing CVC at all is a flat PCI DSS Requirement 3.2.2 violation. Storing the PAN encrypted with a key sitting in a public repository is Requirement 3.4 in name only.
Anyone who pulled the .git directory had the key. Anyone who used the leaked database password had the encrypted card data. Combined: cardholder name, full PAN, expiry, and CVC in plaintext, for every customer who'd ever paid through the site. Under the Privacy Act's Notifiable Data Breaches scheme, that's the kind of exposure where the threshold for “likely to result in serious harm” stops being a discussion.
Evidence of exploitation
The web server's access logs covered two months. The earlier logs had rotated off. Within just those two months we counted 30,724 successful fetches of git pack objects from the .git directory. Multiple distinct source IPs, patterns matching known reconnaissance tooling. Active automated harvesting at scale, for at least the two-month window the logs covered.
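A count like the one above can be pulled straight out of the access logs. A rough sketch against a synthetic log — the field positions ($7 for the path, $9 for the status) assume the common/combined log format, so adjust for your server:

```shell
#!/bin/sh
# Count successful fetches of git pack objects in an access log,
# using a small synthetic log for illustration.
sample_log=$(mktemp)
cat > "$sample_log" <<'EOF'
1.2.3.4 - - [01/Jan/2025:00:00:01 +0000] "GET /.git/objects/pack/pack-abc.pack HTTP/1.1" 200 51234
1.2.3.4 - - [01/Jan/2025:00:00:02 +0000] "GET /.git/HEAD HTTP/1.1" 200 23
5.6.7.8 - - [01/Jan/2025:00:00:03 +0000] "GET /index.php HTTP/1.1" 200 1024
5.6.7.8 - - [01/Jan/2025:00:00:04 +0000] "GET /.git/objects/pack/pack-def.idx HTTP/1.1" 200 9876
EOF

# Successful (2xx) requests for pack objects under /.git/
count=$(awk '$7 ~ /^\/\.git\/objects\/pack\// && $9 ~ /^2/ {n++} END {print n+0}' "$sample_log")
echo "pack fetches: $count"

# Distinct source IPs touching /.git/ at all
ips=$(awk '$7 ~ /^\/\.git\// {print $1}' "$sample_log" | sort -u | grep -c .)
echo "distinct IPs: $ips"
rm -f "$sample_log"
```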
Competent attackers don't break things. They use credentials quietly. They sell card data into the relevant marketplace and let the next person use it. The lack of visible fraud means nothing about the exposure.
The supplier's response
We delivered an initial audit to the client earlier in the week with four specific remediation steps for the agency to complete by close of business Friday: take the .git directory off the public web, rotate every leaked credential, remove stored card data, confirm a working backup.
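The first of those steps is typically a one-stanza fix at the web tier. A minimal sketch for NGINX (the server in play here), using a standard regex location block — the well-known exclusion is an assumption for sites that need ACME challenges:

```nginx
# Refuse to serve any dot-directory or dotfile (.git, .env, .htpasswd, ...),
# except the ACME /.well-known/ path. Returning 404 rather than 403 avoids
# confirming to a probe that the path exists.
location ~ /\.(?!well-known) {
    return 404;
}
```

Blocking at the web tier is the stopgap; the durable fix is not deploying the repository into the web root at all.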
By 7am Friday, instead of completing those steps, the agency had taken the live site offline. The public URL was returning a 403 from NGINX. As far as anyone outside the agency could tell, the site was lost.
We spent that day completing the full forensic report: the plaintext credit card exposure, the database column-level findings, the 30,724 hostile fetches, the cross-tenant blast radius. We delivered it to the client during Friday afternoon.
By 11pm Friday, the site was still down. No update, no progress, no proposed restoration timeline. The agency said they'd be back at it Saturday morning.
In the middle of that, they asked the client whether we could send them a copy of the codebase. They didn't have a valid backup of their own client's site.
The supplier had been maintaining a production website handling customer credit card data for four years, and the only available recovery copy was the publicly leaked one. ISO 27001's Annex A.8.13 (information backup) and the Essential Eight's Regular Backups mitigation both exist to prevent exactly this. Neither was being met.
At that point the client made the call: wait for the people who'd caused the exposure to fix it without a backup, or have us take over.
What we did
The situation didn't allow for waiting. We took the initiative.
We pulled card data out of scope entirely. There was no business reason for the site to be handling cards in the first place. We replaced the gateway with an invoice-on-submission flow through the client's accounting platform, dropped the card columns from the database, and PCI DSS scope was gone.
We rotated every credential under the client's own accounts. Every API key, every secret, every password, the encryption key. The supplier had no residual access to anything.
We had the site back online before the original agency had started. By Monday, the client owned every credential, every account, and every integration. The supplier was no longer in the picture.
Full technical detail available on request.
The questions your auditor will ask
If you operate a business that gets asked vendor security questionnaires, tender assessments, or supplier reviews, the scenario above is exactly what those questions are trying to prevent. Government supply chains, healthcare, finance, legal, regulated sectors generally: the questions are coming.
- What third parties have access to your production systems? If your developer disappeared tomorrow, what gets rotated, by whom, on what timeline?
- How do you know your supplier isn't leaking your data? Append /.git/HEAD to your domain. A 200 means your codebase is downloadable; anything other than a 404 warrants a closer look.
- Where do credentials live? Anywhere in a code repository, including old commits, means exposed.
- Are your suppliers keeping working, off-host backups recent enough to actually restore from? Have you tested a restore?
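On the "old commits" point: deleting a secret from the current tree does not remove it from history. A rough sketch that builds a throwaway repo to show a deleted credential still surfacing — the grep pattern is illustrative only; dedicated scanners such as gitleaks or trufflehog do this properly:

```shell
#!/bin/sh
# Demonstrate that a committed-then-deleted secret survives in git history.
repo=$(mktemp -d)
(
  cd "$repo"
  git init -q
  git config user.email t@example.com && git config user.name t
  echo 'DB_PASSWORD=hunter2' > config.php                  # simulated leaked secret
  git add config.php && git commit -qm 'add config'
  git rm -q config.php && git commit -qm 'remove config'   # removal does not purge history
)
# -p prints patches for every commit on every ref, so both the addition
# and the removal of the secret line show up.
hits=$(git -C "$repo" log -p --all | grep -cEi 'password[[:space:]]*=|api[_-]?key|aws_secret')
echo "history hits: $hits"
rm -rf "$repo"
```

Any non-zero hit count means the credential is recoverable by anyone who can read the repository, which is why rotation, not deletion, is the only real remediation.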
If the answer to any of these is “I don't know” or “I'd have to ask the developer”, that's the gap. That's what an ISO 27001 auditor flags. That's what a tender security questionnaire is trying to surface.