Edit: seems like they fixed it, it works for me
My general contribution to the conversation is GitHub should have a donation system. Once a week, some kind of donation raffle happens, and the winner gets GitHub taken down for “reasons” for 4 hours, then 5, 6, 8. Microsoft profits more, and it slowly becomes a technology-and-money-induced vacation day.
Or, and I know this sounds crazy, we (I actually mean you) collectively agree on laws that give everyone a couple of weeks of paid vacation a year.
This thread pivots hard from version control jokes into a somber discussion of the future of Minecraft.
I have found my people. You all are amazing.
Interesting - I’ve been retired a few years but the way we used github was git commit, git push, usually at the end of the day. How has the workflow changed so people constantly need it to do any work?
Unfortunately, the ecosystem around github has evolved so that most folks centralize their testing and deployment code to run on github infrastructure. Frankly, a perversion of the decentralized design of git.
Fortunately for my team, it doesn’t matter because our process requires stuff that can’t be done from github infrastructure anyway, so we have kept the automatic testing and deployment on premise even as github is the ‘canonical’ place for the code to live.
Wow, that’s such a classic Microsoft approach - “Embrace and Extend.”
GitHub added CI/CD pipeline functionality (called GitHub Actions). If it’s down I can’t merge code or deploy code anywhere since company policy requires analysis builds to run, and our deploys use the GitHub Actions to ship the code.
GitHub Actions is crazy convenient, but it’s a huge pain to run a copy locally. I try not to depend on it too much, but sometimes it is simplest to just go refill my coffee while it figures itself out.
(And it’s almost never down. This week was unusual, to me.)
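(If you do want a local copy: the third-party tool nektos/act can run many GitHub Actions workflows in Docker. It’s an approximation of the hosted runners, not a perfect clone, but it covers the common cases:)

```sh
# Assumes Docker is installed and the repo has workflows in .github/workflows/.
act -l          # list the jobs act detects in the workflow files
act push        # simulate a "push" event and run the matching workflows
act -j build    # run a single job by name ("build" is a hypothetical job name)
```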
I still use github for personal projects but have never looked into what the Actions do, since github serves my minimal needs as-is. But it also did when I was working. I would think if people find that depending on certain features ultimately disrupts their work, the smart thing would be not to use those features.
> I would think if people find that depending on certain features ultimately disrupts their work, the smart thing would be not to use those features.
Yes. That would be wiser. But it would also mean setting up a Jenkins server.
No problem. Jenkins! Get your ass in here!
I marvel at the proficiency with which Microsoft tears down every piece of software it touches nowadays.
Look what they just did to Notepad!!!
MONSTERS!!!
I’ll get downvoted for this, but I think they take good care of github and Minecraft. As for the rest though… not so good.
They deliberately removed code search for logged-out users almost immediately. Just recently they removed cloning without an account, so now updating my computer requires signing in to github.
They have been awful stewards.
If by “good care” you mean “the core functionality is up and running most of the time”, then yes.
My parents took good care of me, then.
Please take better care of yourself than your parents did! You deserve to feel taken care of <3
Better than GitHub did pre-acquisition, and we actually got new features after years of stagnation. I don’t know what they changed, but at least the product moved forward in some way.
… Didn’t they revoke the Minecraft licenses people purchased because they didn’t manage to migrate their Mojang accounts to Microsoft accounts in a short amount of time?
People were given three years to migrate; I wouldn’t quite call that short.
People have absolutely taken a multi-year break from Minecraft before.
Really though, why is there a time limit at all? Google still allows you to convert old Youtube accounts to Google accounts, why can’t Microsoft do the same?
Lost access to my OG account because I didn’t find out about this until a month after it was too late.
On top of that, even if you did manage to migrate your account, the M$ Minecraft accounts get deleted without warning after some time (2 years?) of inactivity. Guess how I found that out.
The MS or MC account?
Because my MC account is very dead, while my MS account is semi-active. Edit: (“dead” meaning unused, not deactivated)
Oh yeah, Minecraft fans will tell you just how much they love their handling of it…
As a Minecraft player, as long as they leave java edition alone I’m fine with it.
I haven’t played Minecraft for a while, but I was under the impression that Microsoft was progressively turning the Bedrock version into a microtransaction hellscape. If I had to reluctantly commend Microsoft for anything, I’d rather go for Visual Studio Code.
The Bedrock version is bad, but they recently gave everyone who owned one version of the game the other version for free, and they now sell both versions for the price of one.
Oh, yeah bedrock sucks. Java edition is still great though. And yes, VSCode is good as well.
Bedrock indeed, but you didn’t even have Bedrock edition before Microsoft, so you can’t really say MS fucked it over since it was always kinda bad. Java has been pretty nice and the “big content updates” direction under Microsoft really rejuvenated the game.
It’s nice that they brought Minecraft to practically every device. It sucks that they didn’t replicate redstone
That’s because Microsoft has refused to change anything meaningful: there are new mobs, but they don’t drop anything of value, and there are new biomes, but the blocks are all decorative. Microsoft knows they’ll screw it up, so they only make surface-level changes.
It works on my machine!
My company owns its infrastructure, we don’t have issues like this, and our production servers run like well-oiled machines. Yet they want to move to third-party cloud services for reasons that have yet to be explained.
A brief conversation:
Cloud good, very good for dynamic sizing up and down.
But sir, we don’t need to scale up and down for our business.
But cloud good.
I’m worried that when the bean counters see the price difference between AWS and self-hosted, they’ll find AWS more expensive, and we’ll have to redo a year’s worth of work for 10 scaled agile teams, but on our own machines.
I’m guessing you have a fully staffed infrastructure team, so the reason that has yet to be explained is that they want to downsize that team.
We use cloud services because we have never had a fully staffed infrastructure team.
The explanation is guys in marketing buying fancy lunches and rounds of golf for the guys in the C-suite. (Source: a tired IT admin who has had to talk his management team off this cliff after fancy tech-demo dinners from unsolicited cloud/software companies.)
The fewer magic black boxes there are around, the
- (happy variant) easier it is to train new people and the less mental burden there is on existing staff
- (sad variant) easier it is to fire people.
What do they mean by “Carry On.”?
It’s already over. The guy on the left has both the High Ground and the higher posture.
In this case it means “nevermind”.
He’s liable to get top-heavy and just fall over. Guy on the right has a nice center of gravity.
He sacrificed sure-footing for a killing stroke.
"But… but… My high ground 😭 "
~ Obi-Wan Kenobi
People forget git is a DVCS; you can send PRs to each other without relying on GitHub.
Wait what
Yeah dog, pretty much everything on the github website is an interface for displaying info held in the repository’s .git folder.
That’s how there’s github, gitlab, gitea, forgejo, etc. There are even applications you can download to visualize info in git; they run on your local machine and only see your local filesystem.
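For instance, plain git will show you much of what a forge renders, straight from the local .git data:

```sh
# All of this reads only the local .git directory; no server or login required.
git log --graph --oneline --decorate --all   # commit graph, like a forge's network view
git show HEAD                                # the diff for the most recent commit
git blame README.md                          # per-line authorship (file name is just an example)
```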
Maybe what I misunderstood is where git ends and github starts. I know there are other hosting platforms, and I’ve used a lot of git visualizers. But what I’ve never tried to do is use git with multiple developers without connecting to some 3rd party server. Is there some peer to peer functionality built into git or did I totally misunderstand your original comment? Or are you literally sharing the git folder via network file system, thumb drive, etc?
Yes, the original use case is sending patches back and forth on the Linux kernel mailing list.
Git doesn’t have a concept of a preferred repository; your local copy is exactly as valid to git as a git server hosted on github.
The originally intended workflow, as I understand it, involved generating patches which would be shared via a mailing list.
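A minimal sketch of that patch-based flow, assuming two machines and a made-up file name:

```sh
# Contributor: turn the two most recent commits into a mailable patch file
git format-patch -2 --stdout > my-changes.patch

# Deliver it however you like: git send-email, or just attach it to a message.

# Maintainer: apply the patches as real commits, author info intact
git am my-changes.patch
```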
In practice there will generally be a repository that’s considered “canonical” for a project, whether that’s the one on the computer of the lead maintainer or some hosted solution.
A basic git server is essentially just a repository owned by a restricted user with SSH access granted to maintainers. This can allow users to push and pull from a centralised or semi-centralised repository in much the same way as GitHub.
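As a rough sketch (the host and paths here are invented), that kind of minimal server is just:

```sh
# On the server: create a bare repository for a restricted 'git' user
ssh git@example.com 'git init --bare /srv/repos/project.git'

# On each developer's machine: treat it like any other remote
git clone git@example.com:/srv/repos/project.git
cd project
git push origin main
```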
Reliance on external services to build and test code is absolutely braindead design
It’s not like internal build servers are 100% reliable, scalable, and cheap though. Personally, I’ve found cloud-based build tools to be just a better experience as a dev.
Jesus Christ, can you not even conceive of the idea of building on your own machine?
I’m talking about in a professional environment. You basically need a team to manage them and have a backlog of updates and fixes and requests from multiple dev teams. If you offload that to something cloud based that pretty much evaporates, apart from providing some shared workflows. And it’s just generally a better experience as a dev team, at least in my experience it has been.
Honestly, no, you don’t need a team. It is good practice, but not necessary. I’ve worked at several companies where the production build was made from a tower under a desk or a server blade, or an iMac on a shelf, sometimes one guy knew how it worked, sometimes nobody did, sometimes the whole team did. In most cases, managed by the product’s dev team. IT just firewall-wrapped the crap out of them.
Not to discredit the main meta-thread of the “we don’t have to manage anything with cloud” vs. “having a management team” debate. Odd thing is, cloud prices are climbing so rapidly that the industry could shift back in the near future.
Bottom line for most business though: As long as the cost makes sense, why bother self-hosting anything. That’s really what it comes down to. A bonus too, as most companies like being able to blame other companies for their problems. Microsoft knows that, and profited greatly with Windows Server/Office/etc. for that very reason.
When your quarterly profits are dashed because an employee backed into your server room and turned on the halon fire suppression system and you gotta rebuild from scratch from month-old off-site tape backups, how do you write a puff piece to explain that away without self-blame or firing the very people that know how it all works?
When your quarterly profits are dashed because Microsoft’s source control system screwed up, you make a polite public “our upstream software partners had a technical error, we’ve addressed and renegotiated,” message, shareholders are happy, and customers are still stuck with a broken product, but the shareholders are happy.
Well yeah, strictly you don’t, but the idea of a single machine under someone’s desk as a build server, managed by one person, where you have multiple dev teams fills me with horror! If that one person is off and the build server is down, you’re potentially dead in the water for a long time. Fine for small businesses that only have a handful of devs, but problematic where you’ve got multiple teams.
> Bottom line for most business though: As long as the cost makes sense, why bother self-hosting anything. That’s really what it comes down to. A bonus too, as most companies like being able to blame other companies for their problems. Microsoft knows that, and profited greatly with Windows Server/Office/etc. for that very reason.
Yup, exactly this. Why waste resources internally when you can free them up to do more productive work? There’s also going to be some kind of SLA on an enterprise plan where you can get compensation if a service outage lasts a long time. Can’t really do that if it’s self-managed.
In a professional environment, I’ve never had remote-only build systems, with the exception of release signing, due to locked-down compiler licensing. Otherwise, there’s always been a local option.
Edit: is my personal experience wrong somehow?
No, that’s actually genius.
How else are you supposed to get random paid break-time, which the boss can’t stop you from even if a crunch is going on?
> absolutely braindead design
You’ve clearly not worked at my company
- Azure DevOps and Pipelines, but only that and nothing more (not allowed to deploy to Azure/Microsoft stuff)
- ONLY deploy cf to AWS
- write primarily C# for all services, even our websites (IIS 7, cshtml)
- the only exception is a new mobile app written in React Native, but even that is more bloated than the Windows 11 start menu
- projects are generally so poorly maintained that we’re still using Bootstrap 4 and outdated framework versions; I know personally there’s a Windows Server 2003 box chugging along somewhere
- “we know about this (medium) bug/vuln, we can work around it. Just add this new feature to the codebase”, but imagine this times 100. I quietly fix the bugs because I wouldn’t be able to live with myself otherwise
- the projects are 95% boilerplate for the simplest of tasks (a “curl a thing and pass it to another service” job has about 40 different classes), no processing…
- an “AWS Q first” company where none of the developers actually get access to write code with it. Explicitly forbidden from using Copilot: “it’ll use our code for their training”… right. Won’t someone think of our flawless, industry-standard code. Also, that’s not how that works
- security nonexistent. AWS security tools used to scream at you every time you opened the AWS console. The solution at the company was to restrict views to those pages so (most) people don’t see the security/vuln reports. To get reports, you’d have to ask cybersec
- most developers are in a constant state of burnout

There’s more, but I’d violate my NDA too much at that point. We’re expected to hit half a billion GBP in profit in a couple of years. I think we, the developers at our company, are the biggest clowns in the entire IT industry. And yeah, we’re responsible for your gov IDs & loan applications.
ggwp
> security nonexistent. AWS security tools used to scream at you every time you opened the AWS console. The solution at the company was to restrict views to those pages so (most) people don’t see the security/vuln reports. To get reports, you’d have to ask cybersec.
Not going to lie, that is hilarious. And forget red flags, you have a whole squadron of semaphores right there.
Like I said, braindead
Sometimes our internal CI tools break and I can’t build either. I think GitHub Actions syntax is actually valid in Forgejo as well, so I don’t really think it’s a problem.
Someday soon: Claude is down
What are you planning? Downing the Dutch Eurovision singer Claude? /s
There’s a reason we value the local development environment.
You can run everything locally; the only use for the cloud environment is CD.
I’ll be honest, I just enjoy seeing my auto-updater script work whenever I push to main and the web page updates itself. FEELS SO GOOD TO JUST DO A PUSH AND HAVE YOUR CHANGES UP IN 3 MINS.
Well yeah!
That’s the CD part :)
We’re rolling the same thing, except with all our cloud infrastructure, our code, and various integrations.
Automatic deployments are so great, as long as you trust your integration process and test suites.
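For anyone wanting the self-hosted version of that push-to-deploy loop, the classic trick is a post-receive hook on the server-side repository. A minimal sketch, with made-up paths and branch name:

```sh
#!/bin/sh
# post-receive hook inside the bare server-side repo (hooks/post-receive).
# Deploys the 'main' branch into the web root on every push that touches it.
TARGET=/var/www/mysite      # hypothetical deploy directory
REPO=/srv/repos/mysite.git  # hypothetical bare repository

while read oldrev newrev ref; do
  if [ "$ref" = "refs/heads/main" ]; then
    git --work-tree="$TARGET" --git-dir="$REPO" checkout -f main
  fi
done
```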
ackshually you can run most of the CI locally
Don’t tell the boss, jerk.
Doesn’t matter if the mechanism that checks the repo and sends the trigger message to the runner is down.
how does that affect running the CI locally? I don’t mean triggering the cloud CI manually, I mean running the commands manually.
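In the simplest reading, “running the CI locally” just means typing the same commands the pipeline would run; what those are depends entirely on the stack. A purely illustrative Node-flavored example:

```sh
# The same steps a typical pipeline runs, typed by hand
# (swap in your stack's equivalents):
npm ci           # clean install of pinned dependencies
npm run lint     # static checks
npm test         # unit tests
npm run build    # production build
```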
Ironically, I find myself writing more code when CI is broken and I don’t have to babysit it.