The developer said he forgot that his secret keys were in the repository.
If you have your secret keys in your repository you've already fucked up, long before you accidentally make that repository public.
One of the first things you should do in a repo is add a .gitignore file and make sure there are rules to ignore things like `*secret*` or `*private*` etc. Also, I pretty much never use `git add .` because I don't like the laziness of it, and EVERY TIME one of my coworkers checked in secrets they were using that command.
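Something like this at the top of a fresh repo's .gitignore covers the obvious cases (patterns are just examples, not a complete list):

```
# Illustrative catch-all patterns for credential-ish files
*secret*
*private*
*.key
*.pem
credentials*
```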
Even though that's a good extra precaution, per-person config data, such as keys, should be stored outside of the repo, e.g. in the parent directory or, better, in the user's home dir. There is zero reason to have it in the repo. Even if you use a VM/containers, you can add the config via an extra mount/share.
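As a rough sketch of what that looks like in practice (the path and names here are made up), the key lives in the home dir and reaches the app via the environment:

```
# Nothing below lives inside the repository.
export API_KEY="$(cat ~/.config/myapp/api_key)"
./run-my-app        # hypothetical app; it reads API_KEY from the environment
```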
What's the general consensus on storing encrypted data in the repo with the keys outside? I see people recommend that but I'm too paranoid and my secrets are very small in size so it hasn't been necessary.
The format of the encrypted file can give attackers an advantage: if your code reads the decrypted file, an attacker can guess that the first line is a comment or the name of a setting. A savvy person can combine that with the algorithm to perform a "known plaintext attack", for example by generating a number of candidate passwords that would lead to files starting like that.
That's smart. Anyone trying that should definitely have a machine-generated strong password!
That's not quite the definition of a known-plaintext attack (cryptography nerd here); that's brute force with a "crib", to use older terminology (a known pattern which lets you test candidate keys).
A known-plaintext attack is defined as an attack on the algorithm that extracts the key faster than brute force, using analytical techniques.
I've seen that done for configuration management like Salt or Ansible. The repos for that were always hosted on internal Gitlab instances though.
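With Ansible that's typically ansible-vault: the encrypted file is committed, and the vault password stays outside the repo. Roughly (paths here are just the conventional examples):

```
# Encrypt the secrets file; the password file lives outside the repo
# (e.g. ~/.vault_pass) and is never committed.
ansible-vault encrypt group_vars/all/secrets.yml --vault-password-file ~/.vault_pass

# Playbooks decrypt it at run time with the same password file.
ansible-playbook site.yml --vault-password-file ~/.vault_pass
```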
I basically always do a `git add -p`.
Very useful command, and the -p flag works with other git commands as well.
Every time a colleague asks me for help with git, that's the one thing I suggest they use.
What does that do?
Instead of just adding whole changed files, it starts an interactive mode that shows each hunk of the diff one by one and asks you yes or no for each change. Very helpful for doing your own mini code review or sanity check before you even commit.
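For anyone who hasn't tried it, a session looks roughly like this (the exact prompt letters vary by git version), and the same -p flag also works with git checkout, git reset, and git stash:

```
$ git add -p
diff --git a/config.py b/config.py
@@ -1,3 +1,4 @@
+API_KEY = "oops-not-this-one"
(1/3) Stage this hunk [y,n,q,a,d,e,?]? n
```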
Better yet, you can configure a global gitignore for git. I do this mostly to avoid polluting repo ignore files with my editor-specific junk, but *.key and similar patterns can help prevent accidents too.
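Setting that up is one config option (the filename is whatever you like):

```
# Point git at a personal, machine-wide ignore file
git config --global core.excludesFile ~/.gitignore_global

# Then drop editor junk and key patterns into it
printf '%s\n' '.vscode/' '*.swp' '*.key' '*.pem' >> ~/.gitignore_global
```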
I never understood why everyone uses it as an ignore list. In my own and work repositories I always exclude everything by default and re-add stuff explicitly. I have had enough random crap checked in by coworkers in the past. Granted, the whole source folder is fully included, but that has never been a problem.
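For reference, the whitelist style usually looks something like this; the `!*/` line matters because git won't descend into directories that are still ignored (paths are just examples):

```
# Ignore everything by default...
*
# ...but keep descending into directories so the allow rules below can apply
!*/
# ...and explicitly re-add what belongs in the repo
!.gitignore
!README.md
!src/**
```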
And that’s why you always ~~leave a note~~ recheck your .gitignore file before committing
Does Microsoft's GitHub offer any pre-receive hook configuration to reject pushed commits that contain private keys? Surely that would be a better feature to opt all users into than Windows Copilot.
Ehhh. I mean, I have local repositories that contain things that I wouldn't want to share with the world. Using git to manage files isn't equivalent to wanting to publish publicly on github.
I could imagine ways that private information could leak. Like, okay, say you have some local project, and you're committing notes in a text file to the project. It's local, so you don't need to sanitize it; you can put any related information into the notes. Or maybe you have a utility script that does some multi-machine build and has credentials embedded in it. But then over time you clean the thing up for release, forget that the material is in the git history, and ten years later do an open-source release or something.
I do kind of think there's an argument that someone should make a "lint"-type script that runs automatically on GitHub pushes, to sanity-check and warn when someone appears to be pushing out material they probably don't want to share with the world. It'll never be a 100% solution, but it could catch some portion of leakage.
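Tools in that space do exist (gitleaks, awslabs' git-secrets, GitHub's own secret scanning), and a client-side version of the idea fits in a few lines of shell as a pre-commit hook; the patterns below are only illustrative:

```
#!/bin/sh
# Hypothetical .git/hooks/pre-commit sketch: refuse to commit if the staged
# diff adds anything that looks like a credential. Patterns are illustrative.
patterns='BEGIN (RSA |EC |OPENSSH )?PRIVATE KEY|AKIA[0-9A-Z]{16}|password[[:space:]]*='
if git diff --cached -U0 | grep '^+' | grep -Eq "$patterns"; then
    echo "pre-commit: staged changes appear to contain a secret; aborting." >&2
    echo "Bypass (only if you are sure) with: git commit --no-verify" >&2
    exit 1
fi
exit 0
```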
Users often don't take care to separate private and public environments. They just dump all their stuff into one and expect their brain to make the correct decision all the time.
Put your private data into a private space. Never put private data into a mixed use space or a public space.
e.g. Don't use your personal email at work. Don't use your personal phone for business. Don't put your passwords or crypto keys in the same github or gitlab account or even instance and don't reuse passwords and keys, etc.
Having plaintext secrets, or having secrets at all, in a repository is always bad practice, even if it's a super-duper private/local/"no one will ever see this" repo.
I have no sympathy for him; if he is a crypto developer, he knows how important those private keys are. He also knows people scrape public repos all the time looking for keys just like that. The whole point of crypto is to be immutable, so that money is simply lost to him now.
He seems to know how much of a dumb mistake that was, although his description of himself was a bit more colorful.
You're not wrong about how important those keys are and how he definitely should have known better. But I at least have a little sympathy for the guy. Everyone makes mistakes from time to time, even with important stuff. Hopefully they are lucky enough not to lose 40k on one but unfortunately he wasn't. Whether he should have known better or not, that just plain sucks.
> The whole point of crypto is to be immutable, so that money is simply lost to him now.
IIRC there are several cases where some group of people lost a big enough pile of coins and forced most of the miners to fork to get their money back. Not bitcoin, though.
If that doesn't make everyone lose 100% trust in coins that do that, I don't know what will.
I can't believe someone would be so stupid and careless as to develop Web3 software.
They made two errors:
1. Using crypto.
2. Storing the key anywhere close to the repo.
It must be automated for it to happen in 2 minutes, which implies these kinds of things happen often enough for someone to write a script for it.
Yes, it absolutely is automated.
There are bots running constantly, looking for things in public commits that match patterns for exploitable credentials:
- AWS credentials
- SSH keys
- Crypto wallets
- Bank card info
If you push secrets to a public GitHub repo, they will be exploited almost immediately.
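For a sense of how easy the "finding fresh commits" half is: GitHub exposes a public events feed, so a scanner can just poll it and diff what it sees (a rough sketch; a real bot would page through the API and handle rate limits):

```
# List repos from recent public push events; a scanner would then fetch the
# new commits and grep them for credential patterns.
curl -s https://api.github.com/events |
  jq -r '.[] | select(.type == "PushEvent") | .repo.name'
```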
The scanning part is definitely automated by many different actors (for the gains or the "lulz"), but for it to be this fast, the key usage (account draining) must have been automated too, which is a bit more impressive...
If it was a script I wrote, it would have successfully stolen the $40k, but also stolen my own money and deposited both sets of money into a second intended victim's account, because I forgot to clear a variable before the main loop ran again.
You always mess up some mundane detail!
It would have deposited the funds into an account called "foobar123", and they'd have been lost forever.
Oh yes, absolutely, there are bots constantly crawling any open-source code. A friend of mine accidentally leaked their Discord API key, and a whole server got nuked within minutes.
There must be bots trawling GitHub for API keys, crypto secret keys, and other such valuable data.
I'm sad I didn't see any comments saying he shouldn't be using a $40k wallet key to test his software in the first place. Anything could happen with simple code mistakes...just get an empty wallet or one with a few bucks in it.
If there were any sort of password / high-entropy-string detection in their build pipeline, it would have caught a wallet key. Such checks aren't an excuse for a lack of diligence, but they should still be in every pipeline where passwords or keys might have to get used.
I'm terrible about building pipelines for most of my personal projects though, so I'm throwing rocks from my glass house here.
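The entropy idea is simple enough to sketch. Something like this flags long, random-looking strings in a staged diff (the token pattern and the 4.0 bits/char cutoff are arbitrary, purely for illustration):

```
# Pull long tokens out of staged added lines and flag any whose per-character
# Shannon entropy looks more like a key than like prose or identifiers.
git diff --cached -U0 | grep '^+[^+]' | cut -c2- |
  grep -oE '[A-Za-z0-9+/=_-]{20,}' |
  awk '{
    n = length($0); split("", count)
    for (i = 1; i <= n; i++) count[substr($0, i, 1)]++
    h = 0
    for (ch in count) { p = count[ch] / n; h -= p * log(p) / log(2) }
    if (h > 4.0) printf "possible secret (%.1f bits/char): %s\n", h, $0
  }'
```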
I like your CI plan but maybe they just needed some sort of sane policy. Like never commit plaintext keys to any repo. Never work with a $40k key in a new project under development. Never convert a private repo to public.
There’s a reason we use .env files and put them in gitignore.
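i.e. the usual shape of it, with made-up names: the real values stay local and ignored, and only a placeholder template gets committed:

```
# .gitignore
.env

# .env          (local only, ignored)
WALLET_PRIVATE_KEY=the-real-value

# .env.example  (committed, placeholders only)
WALLET_PRIVATE_KEY=changeme
```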
I use a text file version of a novel to back up my keys, then I store the key map in multiple cloud drives. For example, if the word is "lighting" then my key map for that word would be 487,5 (line 487, word 5). Easy to crack, if you know what novel I am using.
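(Recovering a word from that kind of key map is a one-liner, using the 487,5 example above; the filename is made up:)

```
# Print word 5 of line 487 of the novel.
awk 'NR == 487 { print $5 }' novel.txt
```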
That's the copy protection on dozens of computer games from the 90s.
To get my codes you have to play Alone in the Dark 2 and have the original two-sided playing cards, then translate that into Britannic runes and find the latitude and longitude of the given city on a cloth map from the original Ultima.
Well, I am a Gen-X'r.
What a muppet
Incredibly funny story, incredibly awful website.