Please, everyone, put your entire development environment in Github

Stop me if this sounds familiar...

You want to get started with a new framework/runtime. So you install said framework/runtime.


This is a companion discussion topic for the original entry at https://www.freecodecamp.org/news/put-your-dev-env-in-github/

I wonder how this works on Windows, though - since Docker on Windows is host > Hyper-V VM > Docker, that jump in the middle is tricky to navigate.

It would probably be ideal to add an SSH daemon so you can open a console into the container that is a bit roomier than the 40-column / 20-row window in Code.

It is definitely a step in the direction I’d like to see: “cool, a new thing; checkout; it just works =)”
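
Something like this in the container’s Dockerfile would cover the SSH daemon idea; it’s only a rough sketch (the base image and setup are illustrative, not from the article), and you’d still need to add a user or an authorized key before you could actually connect:

```dockerfile
# Rough sketch only. You would still need to create a user or add an authorized key
# before you could log in.
FROM ubuntu:18.04

RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && mkdir -p /var/run/sshd \
    && rm -rf /var/lib/apt/lists/*

EXPOSE 22

# Run sshd in the foreground; publish the port (e.g. docker run -p 2222:22 ...) and
# connect with PuTTY or plain ssh for a full-size terminal outside of Code.
CMD ["/usr/sbin/sshd", "-D"]
```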

What happens when “latest version of the .NET Core SDK” introduces a change which breaks your app?
Your Dockerfile stops working. (It’s been three years since you worked on this project and you’re not even sure anyone is still using it; are you going to fix it?)
Wouldn’t it be better to fix it to a known working version?
(A version, which a few more years in the future, includes a critical security vulnerability meaning your app can be exploited. Are you going to do the work to fix the breaking change above and the security release?)
These are the sort of issues that normally plague old projects.
Hiding this config away in Dockerfiles, compared to having it up front in a README, is going to mean a lot of frustration. If a README says “get .NET Core” and when I google that it says “.NET Core has vulns, use .NET 2”, then straight away I know to expect trouble and may try another app. If I use your old Dockerfile, I may be running vulnerable versions without realizing it if I didn’t check.
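
For what it’s worth, the “known working version” part is a one-line change in the Dockerfile; the image and tags below are only illustrative, not a recommendation of a specific release. It doesn’t resolve the trade-off above (a pinned version will eventually need a security bump), but at least the build keeps working until someone makes that bump.

```dockerfile
# Floating tag: whoever builds this in three years gets whatever is "latest" that day.
# FROM mcr.microsoft.com/dotnet/core/sdk:latest

# Pinned tag: the SDK you tested with is the SDK the next person gets.
# (Pinning by digest, FROM ...@sha256:<digest>, is stricter still.)
FROM mcr.microsoft.com/dotnet/core/sdk:2.2

WORKDIR /app
COPY . .
RUN dotnet restore && dotnet build
```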

Please don’t use “it’s in the Dockerfile” as a substitute for decent documentation. Something being in the Dockerfile doesn’t explain its purpose, why it was chosen over a similar alternative, or which specific version of a package it was tested with.
We all know how well “self-documenting code” worked out, and this feels like a similar problem.
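
To be clear, comments inside the Dockerfile can at least record the “why” next to the “what”, in addition to a proper README. A hypothetical fragment (the image and packages here are placeholders):

```dockerfile
# Hypothetical fragment - the image and packages are placeholders; the point is that
# each choice records its reason instead of living only in someone's head.
FROM node:10

# git: one of our npm dependencies is installed straight from a Git URL.
# curl: used only by scripts/healthcheck.sh; safe to drop if that script goes away.
RUN apt-get update \
    && apt-get install -y --no-install-recommends git curl \
    && rm -rf /var/lib/apt/lists/*
```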

I agree with having Dockerfiles in your repos to make it easier to get stuff up and running - not just for some framework, but for internal projects within a company. I work on a system like that at my present workplace, and it is fan-***-tastic that new people don’t go through hell in a handbasket just to get the privilege of running the services locally. We have typical big-company IT restrictions, and we can still use it.

However, I disagree with tying yourself so heavily to VS Code. I used Eclipse before my current job, I use VS Code now, and later on I’ll use something else. Not only that, but coders shouldn’t have an IDE forced upon them: if a given dev where I work hates VS Code, they can use something else; nobody cares.

Burke, are you out of your freaking mind? GitHub is a breeding ground for piracy and data leaking, as evidenced in the Capital One data breach this past month. Unless you’re working on a purely open-source project, it makes no sense to develop in GitHub. Otherwise, you are essentially giving everyone permission to steal your code.

This is awesome.
Obviously you need to substitute “Github” with “source control”. But the idea is… awesome.
The flow is really good. I just did an Azure Function with TypeScript and a different container. No issues.

But that’s the real pain of it all, isn’t it? Unless you live near an Internet fiber-optic hub and have your own T3 connection (or whatever the modern equivalent is), you’ll have to put your files on some ISP’s servers. True, now you are trusting two outsiders with your code instead of one (GitHub + [Azure, AWS, Google Cloud, etc.]), but most of us are going to have to ship our code to at least one outsider anyway.

Note: I share your view. I wish someone offered something like a blind VM that accepted a completely encrypted Docker image for execution, but as far as I know no such thing exists.

Note 2: I did message GitHub and ask whether they intend to support an encrypted image server, and they said “no”. Seems like an opportunity for some company to step in and offer a service where you work with decrypted source on your side, and GitHub (not the Git utility) only ever sees and stores encrypted payloads. As for losing their source search on the web site, I never found that search very useful anyway.

You know, I don’t know that much about how Docker works on Windows. You could back that up even further and say that I don’t really know how Docker works anywhere.

As for the console row/col limits, I don’t think I follow. Are you saying that the integrated terminal in a remote-container limits you to 40 col?

If your project updates an SDK and you do that in the Dockerfile as well, then I’m going to get that update when I pull. I may need to rebuild the container, but I’ve got the updated spec.

But I see your point. What if we said “in addition to” the README, then? Treat the “.devcontainer” like any other asset that needs documentation.

I agree with all of this. But I would say that supporting the most popular editor in a way that degrades gracefully is a good idea. We did this with browsers for a long time.

I’m referring to the console/terminal window being a sub-window of the Code UI.
If you use PuTTY or another terminal sized to the whole Code window, it is bigger :slight_smile:

Depending on what you want to do, you might need to run a bunch of commands, and having a tmux/screen session might be roomier to manage, etc.

Plus there’s the issue of the common Dockerfile pattern of apt update (or equivalent): only the images produced from a Dockerfile are immutable; the Dockerfile itself is still a source of “works on my machine”.
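
Something like the sketch below narrows that drift, though it’s illustrative only (the tag and package are placeholders), and apt pinning has its own problem: old package versions eventually drop out of the archive.

```dockerfile
# Illustrative only. The image an old Dockerfile produced is immutable, but re-running
# the Dockerfile is not: "apt-get update" pulls whatever the archive serves that day.

# Pinning the base image to a point-release tag (or, stricter, a digest via
# FROM debian@sha256:...) removes one source of drift.
FROM debian:10.1

# Package versions can still move between builds unless you pin them too
# (tmux=2.8-3 style) or build against a snapshot mirror - and pinned versions
# eventually vanish from the archive, which is the "works on my machine" risk again.
RUN apt-get update \
    && apt-get install -y --no-install-recommends tmux \
    && rm -rf /var/lib/apt/lists/*
```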

I love this workflow. We have developed an integration to spin up your remote dev environment directly in Kubernetes. Check it out here:

Yeah - you’re right. I just use “Github” like people use “Kleenex”. It’s a brand that has enough cultural weight to represent the market at large.

You know, someone internally at MSFT messaged me about deterministic compilation and combining that with remote containers, so the security concern is for real. It looks like there is a big market for secure source control, secure dev environments and an easy way to determine that binaries came from a specified set of source files.

Agreed. I’m not trying to inject a trendy tech for no reason, but this could be a real use case for blockchain tech: maintaining a database of publicly verified encrypted containers along with their checksums. An agent like GitHub could still act as the central hub for source code, while there would now be a publicly verifiable set of containers and checksums that people could use and trust, without getting into the difficult waters of SSL certificate chain checking to make sure the encrypted content has not been tampered with.

There could even be smart-contract code that let the container/checksum authors decide how much of the detail concerning the code GitHub is allowed to see (e.g. nothing, just the metadata, or the entire decrypted content). Obviously IPFS would be used for the content, and only the checksums would go on the blockchain; otherwise the storage costs would be unmanageable.

I used the remote container development feature in my blog post about .NET Core 3 error stuff. My hope is that it enables others to more easily experiment with the repo I created.
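
For anyone who hasn’t looked at the config yet, the whole thing is driven by a small .devcontainer/devcontainer.json next to a Dockerfile. Here’s a minimal sketch; the extension ID and command are illustrative and not taken from my repo:

```jsonc
// .devcontainer/devcontainer.json - minimal sketch; values are illustrative.
{
    // Build the dev environment from the Dockerfile sitting next to this file.
    "dockerFile": "Dockerfile",

    // Extensions to install inside the container, not on the host.
    "extensions": ["ms-dotnettools.csharp"],

    // Runs once after the container is created.
    "postCreateCommand": "dotnet restore"
}
```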

I haven’t experienced many issues with Docker on Windows, but WSL 2 should fix many of them once it’s included in a stable Windows 10 release. It includes a Linux kernel that runs natively on Windows, and thus Docker will also run natively rather than using a VM. More details here:

The Linux kernel contains code for “containerization”, which is where you have a virtualized environment that uses the same Linux kernel as the host machine. You can run apps in the virtualized environment, where they’re “sandboxed” and can’t see other apps running on the same host. This OS-level virtualization results in much less overhead compared to full hardware-level virtualization (e.g. VMware or KVM), where an entire system is virtualized, but it does mean you can only run the same OS as the host. The system normally used on Linux is LXC, which is what Docker used in its earlier versions, but it uses its own system now.

On Windows, Docker does use a virtual machine at the moment. It creates a Hyper-V VM called “MobyLinuxVM” that runs a very minimal version of Linux, and all the Docker containers run in that.

In the future, Docker containers will run natively on Windows (thanks to WSL 2), leaving macOS as the only major environment where Docker containers do not run ‘natively’.

The issue is usually to do with file access being available, fast, and correct. It’s the first point brought up in the WSL 2 changes doc.

Whereas on Linux there is no thinking about accessing files; it just works. Looking forward to WSL 2 though - slowly but surely we’ll get there :slight_smile: