I joined HashiCorp in 2016 to work on Nomad and have been on the product ever since. Definitely a lot of feelings today. When I joined HashiCorp was maybe 50 people. Armon Dadgar personally onboarded us one at a time, and showed me how to use the coffee maker (remember to wash your own dishes!). There have been a lot of ups (IPO) and downs (BUSL), but the Nomad team and users have been the best I've ever gotten to work with.
I've only ever worked at startups before, but HashiCorp itself left that category when it IPO'd. Each phase is definitely different, but then again I don't want to go back to roadmapping on a ridiculously small whiteboard in a terrible sub-leased office and building release binaries on my laptop. That was fun once, but I'm ready for a new phase in my own life. I've heard the horror stories of being acquired by IBM, but I've also heard from people who have reveled in the resources and opportunities. I'm hoping for the best for Nomad, our users, and our team. I'd like to think there's room in the world for multiple schedulers, and if not, it won't be for lack of trying.
I've had the incredible displeasure of having to maintain multiple massive legacy COTS systems that were once designed by promising startups and ultimately got bought by IBM. IBM turned every last one into the shittiest enterprise software trash you can imagine.
Every IBM product I've ever used is universally reviled by every person I've met who also had to use it, without exaggeration in the slightest. If anything, I'm understating it: I make a significant premium on my salary because I'm one of the few people willing to put up with it.
My only expectation here is that I'll finally start weaning myself off terraform, I guess.
> Every IBM product I've ever used is universally reviled by every person I've met who also had to use it
From my time at IBM and at other companies a decade ago, I can name examples of this:
* Lotus Notes instead of Microsoft Office.
* Lotus Sametime Connect instead of... well Microsoft's instant messengers suck (MSN, Lync, Skype, Teams)... maybe Slack is one of the few tolerable ones?
* Rational Team Concert instead of Git or even Subversion.
* Using a green-screen terminal emulator on a Windows PC to connect to a mainframe to fill out weekly timesheets for payroll, instead of a web app or something.
I'll concede that I like the Eclipse IDE, which was originally developed at IBM, a lot for Java. I don't think the IDE is good for other programming languages or for non-programming things like team communication and task management.
The green screens tend to be much quicker and more responsive than the web frontends that are developed to replace them.
I've seen a lot of failed projects for data entry apps because the experienced workers tend to prefer the terminals over the web apps. Usually the requirement for the new frontend is driven by management rather than the workers.
Which is understandable to me as a programmer. If it's a task that I'm familiar with, I can often work much more quickly in a terminal than I can with a GUI. The assumption that this is different for non-programmers, or that they are all scared of TUIs, is often mistaken. The green screens also tend to have fantastic tab navigation and other keyboard navigation functionality that I almost never see in web apps (I'm not sure why, as I'm not a front-end developer, but maybe somebody else could explain that).
I'll defend green screens all day long. Lots of people like them and I like them.
I'd agree with you that everything else you listed is terrible and mostly hated, though.
Back in maybe 2005 or so, in our ~60-person family business, I had the pleasure of watching an accountant use our bespoke payroll system. It was a DOS-based app, running on an old Pentium 1 system.
She was absolutely flying through the TUI. F2, type some numbers, Enter, F5 and so on and so on, at an absolutely blistering speed. Data entry took single-digit seconds.
When that was changed to a web app a few years later, the same action took 30 seconds, maybe a minute.
Bonus: a few years later, after we had to close shop and I moved on, I was onboarding a new web dev. When I told him about some development-related scripts in our codebase, he refused to touch the CLI. Said that CLIs are way too complicated and obsolete, and expecting people to learn that is out of touch. And he mostly got away with that, and I had to work around it.
I keep thinking about that. A mere 10 years before, it was within the accepted norm for an accountant to drive a TUI. Inevitable, even. And now, I couldn't even get a "programmer" to execute some scripts. Unbelievable.
I was at a ticket window buying concert tickets a couple weeks ago and was surprised to see the worker using the Ticketmaster TUI / Mainframe interface. She flew through the screens. The same experience on the Ticketmaster website is awful.
Not just accountants. I remember watching fully “non-technical” insurance admin / customer service people play the green screen keyboard like they were concert pianists. People can cope with a lot when they have to.
There is a learning curve, but it's not coping. One of the great things about terminals: with experience, you can type ahead. Even before the form has fully opened, you can type data, which is queued in the input buffer, and work efficiently. In a modern GUI application, a lot of time is wasted reaching for the mouse, aiming, and waiting for the new form to render. That is what requires coping.
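The type-ahead behavior described above can be sketched in a few lines. This is purely illustrative (the `TypeAhead` class and all names in it are invented for the example, not how any real terminal implements its input buffer): keystrokes that arrive before the form is ready are queued, then replayed in order once it opens.

```typescript
// Illustrative sketch of terminal-style type-ahead: keystrokes that
// arrive before the form is ready are buffered, then replayed in
// order once the form opens. Hypothetical names throughout.
class TypeAhead {
  private buffer: string[] = [];
  private ready = false;
  private sink: (key: string) => void;

  constructor(sink: (key: string) => void) {
    // `sink` is whatever consumes keystrokes once the form is open.
    this.sink = sink;
  }

  press(key: string): void {
    if (this.ready) {
      this.sink(key); // form is open: deliver immediately
    } else {
      this.buffer.push(key); // form still loading: queue it
    }
  }

  open(): void {
    this.ready = true;
    for (const key of this.buffer) this.sink(key); // replay in order
    this.buffer = [];
  }
}
```

This is exactly why an experienced operator loses nothing by typing before the screen paints, and why a GUI that discards (or misroutes) early input feels so much slower.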
I had to interact with a piece of Windows software for collecting data with a digital form. We used it to digitize paper-based surveys by mapping free-form questions to a list of choices.
The best part was that it was entirely keyboard driven. If you can touch type, you can just read the paper and type away. The job was mind-numbing, but the software itself was great.
Case in point: the aforementioned accountant obviously hated the new GUI-based app, exactly because of what you said. Aiming the mouse, looking for that button, etc. slows you down.
Not only implement them, but implement them consistently and make users aware.
Consistency is a thing. Old Windows apps often followed a style guide to some degree; that was lost with the web (where it's also harder, since style guides differ between systems like Windows and Mac), and the web never came as close as the mainframe terminal world, where function keys had global effects.
Indeed. One of the things I keep having to tell younger people is: “webapps have no HIG!”
All of the major platforms have a HIG that tells developers how to maximize the experience for users. Webapps have dozens of ways to do things like “search”. Those who never developed for a platform with a HIG do not value it and keep reinventing everything.
I find it ironic that we developers prefer to use CLI because it's quick, efficient, stable, etc., but what we then deliver to people as web apps is quite the opposite experience.
It's what the default is. TUIs default to fast, stable, and high-information-density, so you have to do real work to make them otherwise. And I say this next part as primarily a front-end developer for the past few years: web apps default to slow, brittle, too-much-whitespace "make the logo bigger" cruft, and it takes real work to make them otherwise.
At the end of the day most people are lazy and most things, including (especially?) things done for work, are low quality. So you end up with the default more often than not.
in my experience, many managers tend to try to dumb products down as much as possible, to make it work for the most people. the problem is that this, together with the usual bad ui/ux, makes the product inefficient to use, especially for power users.
then, every couple of years, a startup tries to carve out a niche by making a product that caters to power users and makes efficiency a priority. those power users adopt it and start to recommend it to other regular users. this usually also tends to work quite well because even regular users are smarter than expected, especially when motivated. thus the product grows, the startup grows and voila, a tech giant buys it.
now one of the tech giant's managers gets the task to improve profits and figures out that the way to do this is to increase the user base by making the product easier to use. UX enshittification ensues, the power users start looking out for the next niche product, and the cycle starts anew.
rule of thumb: if the manager says "my grandma who never used a computer before in her life must be able to use it", abandon ship.
An application I used to deal with was similar, but with a somewhat quirky developer, who would deliberately flip between positive/negative confirmation questions, e.g.:
- Confirm this is correct? (Yes=F1, No=F2)
- Would you like to make any changes? (Yes=F1, No=F2)
And maybe sometimes flip the yes/no F-key assignments as well.
In theory this was done to force users to read the question and pay attention to what they were doing; in practice, users just memorized the key sequences.
We had a Tower of Babel collapse when we switched to the web UI.
We gained a million things and lost a million things.
There was an era from around 1985 to early 2000s,
where a large majority of applications had a (somewhat) consistent UI,
based partially around MS-Windows, partially around some IBM 'common ui' design guide principles.
The hallmarks of it were:
- keyboard navigation was possible
- mostly consistent keyboard nav
- common limited set of UI controls with consistent behaviour
- for serious applications, there was some actual thought related to how the user was supposed to navigate through the system during operation (efficiency)
Post-web and post-9/11, now that web browser UI has infested everything,
we are in a Cambrian explosion of crayon-eating UI design.
It seems our priorities have been confused by important things like 'Hi George. I just noticed, that for the admin panels in our app, the background colours of various controls get the wrong shade of '#DEADBF' when loading on the newest version of Safari, can you figure out why that happens?'. 'Oh, and the new framework for making smushed shadows on drop-downs seems to have increased our app's startup time on page transitions from 3.7 seconds to 9.2 seconds, is there any way we can alleviate that, maybe by installing some more middleware and a new js framework npm module? I heard vite should be really good, if you can get rid of those parts where we rely on webpack?'
These days most web apps aren’t written to take advantage of the browser’s built-in tab navigation, and unless the dev is a keyboard user, they don’t even think to add it. This is largely the fault of React reinventing everything browsers already have built in, and treating accessibility as an afterthought. Bare metal web apps written in straight-up HTML do have decent tab navigation. They’re still not as snappy as a green terminal app, though. My first summer temp jobs during college were data entry, in the era when you might get a terminal app or a web app, and the old apps invariably had better UX.
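Beyond the browser's free tab order, the thing green-screen apps had and most web apps skip is a single global function-key dispatcher. A minimal sketch of that layer in TypeScript, with all key assignments and names invented for illustration (no real app's bindings are implied):

```typescript
// Hypothetical green-screen-style function-key dispatcher: one table
// maps keys to form-level actions, applied everywhere consistently.
type Action = "save" | "cancel" | "nextRecord";

const keymap: Record<string, Action> = {
  F1: "save",
  F2: "cancel",
  F5: "nextRecord",
};

// Pure lookup, kept separate from the DOM wiring so it is easy to
// test and reuse across every screen of the app.
function dispatch(key: string): Action | null {
  return keymap[key] ?? null;
}

// Browser wiring would look roughly like this (commented out because
// it needs a DOM; `runAction` is a hypothetical handler):
//
// document.addEventListener("keydown", (e) => {
//   const action = dispatch(e.key);
//   if (action !== null) {
//     e.preventDefault(); // keep F1 from opening browser help, etc.
//     runAction(action);
//   }
// });
```

The point of the single table is the consistency the thread keeps circling back to: every screen answers to the same keys, which is what lets users build the muscle memory the green screens were famous for.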
>The green screens tend to be much quicker and more responsive than the web frontends that are developed to replace them.
Agree! Back in 2005, I was involved in a project to build a web front end as a replacement for the 'green screen' IBM terminal UI connecting to AS400 (IIRC). All users hated the web frontend with passion, and to this day, I do not see web tech that could compete in terms of data entry speed, responsiveness, and productivity. I still think about this a lot when building stuff these days. I'm hoping one day I'll find an excuse to try textualize.io or something like this for the next project :)
This only matters if "quicker and more responsive" is all that matters. Yes, of course you can enter payroll timesheets in a TUI if you spend days/weeks/months gaining that muscle memory, the same way you can edit in vim much faster than in VS Code or Eclipse if you spend weeks/months/years gaining that muscle memory.
The fact that someone who has been doing it for years can do it faster is obvious, and pretty irrelevant.
Take someone who has never used either, and they'll enter data on the web app much faster.
You don't see keyboard nav in most web apps for similar reasons. First-time users won't know about it, there's no standard beyond what's built-in the browser (tab to next input, that kind of thing), and 90% of your users will never sit through a tutorial or onboarding flow, or read the documentation.
IBM eventually figured out that these products were terrible too, even if they saved money on paper; it sold the Rational/Lotus/Sametime teams to an Indian competitor and discontinued their usage internally (I think; it's a big company).
I remember using Rational ClearCase at my first job. Yeah, in that case count me in on the list of people who revile the IBM products they've had to use.
> I don't think the IDE is good for other programming languages or non-programming things like team communication and task management.
It works great for Python and C++, honestly. If you're a solo dev, Mylyn does a great job of syncing with your in-code todo list and issue tracker, though it's not as smooth as the IDE side.
However, its Git implementation is something else. It makes Git understandable and allows that knowledge to bleed back into the git CLI. This is why I've been using it for 20+ years now.
Yeah I was just about to say this -- I used Sametime via Pidgin (I think it may still have been called Gaim back then) on my work Linux machine and it was actually quite nice.
My favourite Sametime feature within Pidgin was, well, tabs (I can't remember if the Windows client had tabs as well..?), which was revolutionary for an IM client in 2005.
But my secret actual favourite feature was the setting which automatically opened an IM window/tab when the other person merely clicked on your name on their side (because the Sametime protocol immediately establishes a socket connection), so you could freak them out by saying hello even before they'd sent their initial message.
DOORS is/was a requirements-management tool, and frankly speaking it was crap, but I have never seen another piece of software as good and comprehensive at requirements management.
I expect it is still used in aviation- or army-related domains, maybe pharma.
I think this is an interesting graph comparing web searches for "terraform alternative" and "opentofu". Notice the spike when the IBM rumors began, and the current spike now that the acquisition is complete?
CentOS was the downstream of RHEL, and many more people used it than Red Hat/IBM knew or wanted to admit. I'd argue that at least 90% of its users (by number of installs) didn't even need any help to configure or troubleshoot it.
But in a very IBM move, and with some tunnel vision, they got triggered by the few people who abused the Red Hat license model and rug-pulled everyone; most importantly universities, HPC/research centers, and other (mostly research) datacenters, which had been able to sew their own garments without effort.
Now we have Alma, which is a clone of CentOS stream, and Rocky which tries to be bug to bug compatible with RHEL. It's not a nice state.
They severely damaged their reputation, goodwill, and most importantly the ecosystem, just to earn some more monies, because numbers and monies matter more than everything else to IBM.
Remember. When you combine any company with IBM, you get IBM.
> they got triggered by the few people who abuse Red Hat license model and rugpulled everyone
Alma is not a clone of CentOS Stream. You can use Alma just like you were using CentOS. It's really no different than before except for who's doing the work.
I agree that communication was bad. But why do you believe that Red Hat isn't able to screw up on their own?
I'll kindly disagree with you on this. Reading the blog post titled "The Future of AlmaLinux is Bright", located at [0]:
> After much discussion, the AlmaLinux OS Foundation board today has decided to drop the aim to be 1:1 with RHEL. AlmaLinux OS will instead aim to be binary compatible with RHEL.
> The most remarkable potential impact of the change is that we will no longer be held to the line of “bug-for-bug compatibility” with Red Hat, and that means that we can now accept bug fixes outside of Red Hat’s release cycle.
> We will also start asking anyone who reports bugs in AlmaLinux OS to attempt to test and replicate the problem in CentOS Stream as well, so we can focus our energy on correcting it in the right place.
So it's just an ABI-compatible derivative distro now, not bug-for-bug compatible like the old CentOS and the current Rocky Linux.
TL;DR: Alma Linux is not a RHEL clone. It's a derivative, mostly pulling from CentOS Stream.
> I agree that communication was bad. But why do you believe that Red Hat isn't able to screw up on their own?
Absorption and "Rebranding and Repositioning" of CentOS both done after IBM acquisition. RedHat is not a company anymore. It's a department under IBM.
Make no mistake. No hard feelings towards IBM and RedHat here. They are corporations. I'm angry to be rug-pulled because we have been affected directly.
Lastly, in the words of Bryan Cantrill:
> You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end.
> Absorption and "Rebranding and Repositioning" of CentOS both done after IBM acquisition. RedHat is not a company anymore. It's a department under IBM.
You're wrong. CentOS Stream was announced in September/October 2019, too close to the IBM announcement to be an IBM decision; it had been in the works for quite some time before, and in fact this all started in 2014 when Red Hat acqui-hired CentOS.
From 2014 to ~2020 you were under the impression that nothing had changed, but Red Hat had never cared about CentOS-the-free-RHEL. All that Red Hat cared about was CentOS as the basis for developing their other products (e.g. OpenStack and OpenShift), and when Red Hat came up with CentOS Stream as a better way to do that, Red Hat did not need CentOS Linux anymore.
Anyhow, I've been through that and other stuff as an employee, and I'm pretty sure Red Hat is more than able to occasionally fuck up on its own, without any need for interference from IBM.
Bug for bug is a sham and always was. It's a disservice to users to only clone something.
Underneath it all, compatibility is what matters. At AlmaLinux we still target RHEL minor versions and will continue to do so. We're a clone in the sense of full compatibility but a derivative in the sense that we can do some extra things now. This is far, far better for users and also lets us actually contribute upstream and have more of a mutually beneficial relationship with RH versus just taking.
Sometimes the hardware or software you run requires exact versions of packages with some specific behavior to work correctly. These include driver components in both the kernel and userland, some specific application that requires a very specific version of a library, and so on.
I, for one, can use Alma instead of the old CentOS 99% of the time, but it's not always possible if you're running cutting-edge datacenter hardware. And when you run that hardware as a research center, this small distinction cuts a lot deeper.
Otherwise, taking the LEAPP and migrating to Alma, or Rocky for that matter, is a no-brainer for an experienced group of admins. But when the computer says no, there's no arguing with it.
We don't change the expected versions. We might patch/backport more to them if there are issues, but the versions remain.
Basically the goal is still to fit the exact situation you just brought up. I'm not aware of this ever not being the case; if it weren't the case for some reason, then we have a problem we need to fix.
All of the extra stuff we do, patch, etc. is with exactly what you just stated in mind.
I'll be installing a set of small servers in the near future. I'll be retrying Alma in a couple of them, to give it another chance.
As I said, in some cases Rocky is a better CentOS replacement than Alma is.
But to be crystal clear, I do not discount Alma as a distribution or belittle the effort behind it. Derivative, clone or from scratch, keeping a distro alive is a tremendous amount of work. I did it, and know it.
It's just me selecting the tools depending on a suitability score, and pragmatism. Not beef, not fanaticism, nothing in that vein.
Sustainability is one of the core reasons why we are not using RHEL SRPMs to build AlmaLinux. RH doesn't want us doing that, and doing so would be unsustainable and bring into question the future of AlmaLinux as it can, and likely will, turn into a game of cat/mouse getting those SRPMs :)
Red Hat bringing CentOS in-house (well before IBM entered the picture) was IMO one of the first in a string of expedient decisions that were... unfortunate. When I was at Red Hat I loudly argued against some of the ways things were handled but I also understand why various actions were taken when they were.
I'd also argue that CentOS classic wasn't fully bug-for-bug compatible, but it was probably close enough for most. It shared sources but used a different (complex) build system, as I understand it.
That closeness allowed CentOS to be a drop-in replacement for RHEL for thousands of installations and exotic hardware combinations. Unfortunately, we don't have this capability anymore. Rocky bears most of that load now.
They are completely different products just reusing branding to confuse what people are asking for.
RHEL Developer is closer, as a no-support, no-cost version of RHEL, but you still have to deal with the license song and dance.
CentOS gave folks a free version that let you run some dev environments that mostly mirrors prod, without worrying about licences or support. CentOS stream doesn't do this out of principle. It's upstream.
But for all practical purposes, that is dropping CentOS. They completely changed the identity of the product, so the fact it has the same branding isn't going to placate anyone.
Companies are often bought and told that nothing will change, and as long as they can pull their weight, this may be true. IBM seems a pretty diversified company, and Red Hat providing 5% of the total revenue there may not be too bad. I don't know how well Red Hat is doing commercially, but a few bad quarters could draw negative attention of the sort where upper management wants to start messing with you and seek more synergy, efficiency, alignment. As a much smaller company within Verizon, having been left alone for a little while, we were then told that The Hug was coming. It did. We didn't grow to be their next billion-dollar business unit (no surprise to anyone in our little company), nor were we able to complement other products (ha! synergy!), and we were shuttered. At some point... engineering will notice.
RHEL has had no significant investment to keep it from becoming irrelevant in the next five years. The datacenter and deployments of Linux have changed so rapidly (mostly due to the new centralization and homogeneity of infrastructure investment) that RHEL's niche is rapidly shrinking.
Image mode RHEL is a pretty significant investment.
Apart from that, in terms of keeping RHEL relevant, most of the attention is on making it easier to operate fleets at scale rather than the OS itself. Red Hat Insights, Image Builder, services in general, etc.
Those are the key things that would keep it competitive against Ubuntu, Debian, Alma, Oracle etc.
We don't run anything on bare metal anymore; it's all containers (90k-employee, very large enterprise).
Of course I can't speak for all the teams, but all new projects are going out on Kubernetes and we don't care about RHEL at all; typically it's Alpine or Debian base images.
Unfortunately IBM is going to ruin everything that was good about working for Hashicorp and eventually everything that was good about Hashicorp products.
I worked for a company acquired by IBM, and we held hope like you are doing, but it was only a matter of time before the benefit cuts, layoffs, and death of the pre-existing culture.
Your best bet is to quit right after the acquisition and hope they give you a big retention package to stay. These things are pretty common to ease acquisition transitions and the packages can be massive, easily six figures.
Then when the package pays out you can leave for good.
None of that has happened for us at Red Hat. Other than the one round of layoffs, which occurred at the time that basically every tech company everywhere was doing much larger layoffs, that was pretty much it, and there's no reason to think our layoffs wouldn't have been much greater at that time if we were not under the IBM umbrella.
Besides that, I don't even remember when we were acquired; absolutely nothing has changed for us in engineering. We have the same co-workers, we're still using all the Red Hat email/intranets/IT, etc., and there's still a healthy promotions pipeline. I don't even know anyone from the IBM side. We had heard all the horror stories of other companies IBM acquired, but for whatever reason it's not been that way at all for us, at least in the engineering group.
Former Hatter here (Solution Architect, Q2 '21 -> Q4 '22). Other than the discussions that took place around moving the storage/business products and teams under IBM (and the recently announced transfer of middleware), I wouldn't have expected engineering to do that much interfacing with IBM. At most division leadership, maybe (this is just personal speculation). Finance and Sales, on the other hand... quite a bit more.
We had a really fun time where the classic s-word was thrown around... "s y n e r g y". Some of the folks I got to meet across the aisle had a pretty strong pre-2010 mindset. Even around opinions of the acquisition, thinking it was just another case of SOP for the business and we'd be fully integrated Soon™.
The key thing people need to remember about the Red Hat acquisition is that it was purely for expertise and personnel. Red Hat has no (or very little) IP. It's not like IBM was snatching them up to take advantage of patents or whatnot. It's in their best interest to do as little as possible to poke the bear that is RH engineering, because if there were ever a large-scale exodus, IBM would be holding the world's largest $34B sack of excrement we've seen. All of the value in the acquisition is the engineering talent and customer relationships Red Hat has, not the products themselves. The power of open source development!
It's heartening to hear that your experience in engineering has been positive (or neutral?) so far. Sales saw some massive churn because that's an area IBM did have a heavier impact in. There were some fairly ridiculous expectations set for year-over-year, completely dismissing previous results and obvious upcoming trends. Lost a lot of good reps over that...
Red Hatter since 2016, first in Consulting, now in Sales.
Oh the “synergy” rocket chat channel we had back then…
Things have been changing, for sure. So has the industry. So have our customers. By and large, Red Hatters on the ground have fought hard to preserve the culture. I have many friends across Red Hat, many that transitioned to IBM (Storage, some Middleware). Folks still love being a part of Red Hat.
On the topic of ridiculous expectations…there’s some. But Red Hatters generally figure out how to do ridiculous things like run the internet on open source software.
FWIW, the change at Red Hat has always been hard to separate between the forces of IBM and the reality of changing leadership. In a lot of ways those are intertwined because some of the new leadership came from IBM. Whatever change there was happened relatively gradually over many years.
Paul Cormier was a very different type of CEO than Jim Whitehurst for sure. But that's not an IBM thing, he was with Red Hat for 20 years previously.
I agree with you FWIW. The company also basically doubled in size from 2019 to 2023. It's very hard to grow like that and experience zero changes. And COVID happened shortly after so that also throws a wrench into the comparisons.
The point is, it's hard to point to any particular decisions or changes I disliked and say "IBM did that"
I do miss having Jim Whitehurst around. Jim spent 90 minutes on the Wednesday afternoon of my New Hire Orientation week with my cohort helping to make sure all of us could login to email and chat, answering questions, telling a couple short stories. He literally helped build the Red Hat culture starting at New Hire. Kind of magical when the company is an 11K person global business and doing 5B in revenue.
Cormier and Hicks have their strengths. Hicks in particular seems to care about cultural shifts and also seems adept at identifying key times and places to invest in engineering efforts.
The folks we have imported from IBM are hiring folks that are attempting to make Red Hat more aggressive, efficient, innovative. Some bets are paying off. More are to be decided soon. These kinds of bets and changes haven’t been for everyone.
>The company also basically doubled in size from 2019 to 2023. It's very hard to grow like that and experience zero changes.
Longtime Red Hatter here. Most of any challenges I see at Red Hat around culture I attribute to this rapid growth. In some ways it's surprising how well so many relatively new hires seem to internalize the company's traditional values.
Yeah, when I left I think there were something like 7x the number of people than when I joined. You can't run those two companies the same way no matter who is in charge.
We use Nomad where I work and we LOVE it. Before Nomad we used K8s for several years, which at that point allowed us to become cloud-agnostic. With the move to Nomad about 3+ years ago, we were able to transition away from the cloud and back to leased bare-metal machines. During our time with K8s, it didn't have a good bare-metal story for its ingress mechanism. In contrast, as we investigated Nomad, it was easy to deploy on pure metal without a hypervisor. The result of our migration to Nomad was having so many capable and far-less-expensive hosting options. Lastly, as part of our Nomad control plane, we also adopted Vault and Consul with great success.
I know there are horror stories around this acquisition and lots of predictions about what will happen, but only time will tell. At a minimum, it has been a delight to use the HashiCorp software stack along with the approach they brought to our engineering workflow (remember Vagrant?). These innovations and approaches aren't going away.
I would GTFO. IBM ain't your friend and ain't your savior; they're unlikely to invest, and the worst may come with increasing IBM management sticking their fingers in the pie. The folks who did well out of this already know; if you have the checks to cash, that was your takeaway, congratulations. Otherwise, find another opportunity. If nothing else, look around and find out what you are worth on the market, and then have that hard discussion with HashiCorp/IBM soon.
I worked with a bunch of people who had worked at a startup that got bought by IBM. As the other commenters attested, they too experienced that IBM is not the kind of company that's going to turn on the investment taps.
There are worse companies to get bought by, but if you've only ever worked at startups then you're not likely to enjoy what this becomes.
I spoke with a guy (too long ago) that was a "genius architect" and worked for a company that was small enough, that he got to implement his castles in the air. Knowing him, they might have been quite good, but it was one person that knew the details and made changes at the architect scale. He had a quirky way of thinking.
When IBM acquired that company, after a few weeks this guy had a meeting with the new engineering people. In the very first meeting, they changed things for him. Instead of a single winding road of development, they wrote out a large spreadsheet. The rows were the distinguishable parts of his largish and clever architecture; the columns were assignments. They essentially dismantled his world into parts for a team to implement. He was distraught. He didn't think like that. They did not discuss it; these were the marching orders. He quit shortly afterwards, which might have been optimal for IBM.
If you are good at your job and want to deliver fast, then you need to adapt to changing circumstances and carry on. Nothing wrong if you can't, but I have learned that's how you keep delivering your best.
Hey, on a personal note, dealing with you and your team on Nomad's GitHub issue tracker was always a good experience. I hope Nomad still has a future under IBM's roof.
Fairly large Nomad Enterprise user here, and I just want to say thanks for all of the work you and the team put in. I'm a big fan of Nomad and really appreciate the opportunities it has afforded me.
Regardless of the general sentiment, hoping for the best outcome for all of you.
I just wanted to say thank you for your work on Nomad. It's one of the most pleasant and useful pieces of software I have ever worked with. Nomad allowed us to build out a large fleet of servers with a small team while still enjoying the process.
Just wanted to say thanks for your work on Nomad. Amazing tool that had me rethink a lot of things about how I work with infra and software in general, and it is always pleasant to work with in itself.
No matter how long you worked at the acquiree or how instrumental you were, be prepared for your opinions to be overridden by IBM lifers because you're not "true blue" (i.e., hired directly into IBM). Also prepare for the bluewashing!
There are no resources and opportunities after being acquired by IBM. I worked for Red Hat when they were acquired. Our former CEO was quickly shown the door. We were highly profitable, with almost $1B in quarterly revenue. I left not long after the acquisition, and not long after that, they laid off a bunch of staff.
No matter what they tell you, your day to day will not improve. For my area, it was mostly business as usual, but a net decrease in comp because IBM's ESPP is trash.
I have found the experience very different from what the OP described.
As you know, the layoffs happened around the same time as the rest of the industry's layoffs (fashion firing), so I don't feel like they had a significant effect on the culture.
I am fully remote though, and have been for 15 years.
What part of my experience did you find different than your own? I said the day to day was mostly the same, minus the decrease in comp. I was mostly trying to articulate that the idea that IBM is going to 'super power' HashiCorp is not real, despite what IBM says.
A lot of what was communicated during the acquisition process was how IBM was going to super power Red Hat and help Red Hat grow into an even larger entity, and how Red Hat actually needed IBM to survive.
HashiCorp's stuff always struck me as pretty hacky, with awkward design decisions. For Terraform (at least a few years ago), a badly reviewed PR could cause catastrophic data loss because resources are deleted without requiring an explicit tombstone.
Then they did the license change, which didn't reflect well on them.
Now it's being sold to IBM, which is essentially a consulting company trying to pivot to mostly undifferentiated software offerings. So I guess HashiCorp is basically over.
I suspect the various forks will be used for a while.
> For Terraform (at least a few years ago) a badly reviewed PR could cause catastrophic data loss because resources are deleted without requiring an explicit tombstone.
Lifecycle rules to prevent stuff like this have been in place for as long as I can remember. I'm not sure this is a "problem" unique to Terraform.
IIRC, the lifecycle hook only prevents destruction of the resource if it needs to be replaced (e.g. a change to an immutable field). If you outright delete the resource declaration from the code, it's destroyed. I may be misremembering though.
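For reference, a minimal sketch of the lifecycle block in question (the resource type and name here are illustrative):

```hcl
resource "aws_db_instance" "prod" {
  # ... instance configuration ...

  lifecycle {
    # Fails any plan that would destroy this resource, including a
    # change to an immutable field that forces a replacement.
    prevent_destroy = true
  }
}
```

As far as I can tell, this only guards the resource while the declaration itself still exists in the configuration; delete the whole block and the lifecycle setting goes with it.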
The Google Cloud Terraform provider includes, on Cloud SQL instances, an argument "deletion_protection" that defaults to true. It will make the provider fail to apply any change that would destroy that instance without first applying a change to set that argument to false.
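Sketched out (the instance settings are illustrative, not prescriptive):

```hcl
resource "google_sql_database_instance" "main" {
  name             = "main-instance" # illustrative
  database_version = "POSTGRES_15"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }

  # Defaults to true: any plan that would destroy this instance fails
  # until a change setting this to false has been applied first.
  deletion_protection = true
}
```

To actually destroy the instance you first apply a change flipping `deletion_protection` to false, then run the destroy.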
That's what I expected lifecycle.prevent_destroy to do when I first saw it, but indeed it does not.
This is not a Terraform problem, this is your problem. In theory, you should be able to recreate the resource with only some downtime or a few services affected. You should centralize/separate your state and put stronger protections on it.
I think the previous post is about a resource being removed from a configuration file, rather than an invocation explicitly deleting the resource on the command line. Of course, if it's removed from the config file, presumably the lifecycle configuration was as well!
Yeah, that's a legit challenge that it would be great if there was a better built-in solution for (I'm fairly sure you can protect against it with policy as code via Sentinel or OPA, but now you're having to maintain a list of protected resources too).
That said the failure mode is also a bit more than "a badly reviewed PR". It's:
* reviewing and approving a PR that is removing a resource
* approving a run that explicitly states how many resources are going to be destroyed, and lists them
* (or having your runs auto approve)
I've long theorised the actual problem here is that in 99% of cases everything is fine, and so people develop a form of review fatigue and muscle memory for approving things without actually reviewing them critically.
I find this statement to be technically correct, but practically untrue. Having worked in large terraform deployments using TFE, it's very easy for a resource to get deleted by mistake.
Terraform's provider model is fundamentally broken. You cannot spin up a k8s cluster and then subsequently use the k8s modules to configure it in the same workspace; you need a different workspace to import the outputs. The net result was that we had something like five workspaces which really should have been one or two.
A seemingly inconsequential change in one of the predecessor workspaces could absolutely wreck the resources in the downstream workspaces.
It's very easy in such a scenario to trigger a delete-and-replace, and for larger changes you have to inspect the plan very, very carefully. The other pain point was that I found most of my colleagues going "IDK, this is what worked in non-prod" while plans were actively destroying and recreating things; as long as the plan looked like it would execute and create whatever little thing they were working on, the downstream consequences didn't matter (I realize this is not a shortcoming of the tool itself).
What happens if you forget the lifecycle annotations or put them in the wrong place or you accidentally delete them? Last time I checked it was data loss, but that was a few years ago.
The same as in any other language when what you wrote was not what you intended? Sorry, I'm really confused about what your complaint here is or how you'd prefer it to work. If you make a sensitive resource managed by any kind of IaC, of course the IaC can destroy it in a manner that results in irretrievable data loss. The language has long had semantics in place to prevent that, and I'm not sure, as a power user, I'd want it any other way: I'm explicit about what I want it to do and don't want it making crazy assumptions that I didn't write.
Like, what happens if you forget to free a pointer in C? Sorry for the snark, but there is an unbelievable number of things to complain about in tf, and I've never heard this one.
> what happens if you forget to free a pointer in c?
Assuming you mean 'forget' to free malloc'd space referenced by at least one pointer, that's an easy one: it's reclaimed by the OS when the process ends.
Whether that's a bad thing or not really depends on context. There are entire suites of interlocked processing pipelines built around the notion of allocating required resources, streaming data through, and terminating on completion, with no free()s.
Surely my salient point is recognized regardless of semantics, but thanks for the correction. To use another example from another post: what happens if you DROP TABLE in SQL?
DROP TABLE is explicit. Inadvertently removing a line from a config file and having Postgres decide to automatically "clean up" that "unneeded table" would be a more apt analogy.
"What happens if I turn a table saw on and start breakdancing on it?"
Of course you're going to hurt yourself. If you didn't put lifecycle blocks on your production resources, you weren't organizationally mature enough to be using Terraform in production. Take an associate Terraform course, this specific topic is covered in it.
I'm not familiar with every lifecycle argument but I don't know of any that prevent resources being destroyed if they are removed from the tf file (what the parent was talking about). prevent_destroy, per docs, only applies as long as the resource is defined.
I think the only way to avoid accidentally destroying a resource is to refer to it somewhere else, like in a depends_on array. At least that would block the plan.
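A rough sketch of that guard-resource idea (names hypothetical; assumes the null provider is available): if anything still references the deleted resource, `terraform plan` errors on the dangling reference instead of planning a destroy.

```hcl
resource "aws_db_instance" "prod" {
  # ... instance configuration ...
}

# A separate "canary" resource that references the database. If the
# aws_db_instance.prod block is deleted from the configuration, this
# reference dangles and `terraform plan` fails with an
# undeclared-resource error instead of silently planning a destroy.
resource "null_resource" "prod_db_guard" {
  depends_on = [aws_db_instance.prod]
}
```

The obvious downside is that the guard lives in the same file tree, so a sufficiently sweeping deletion removes both.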
>I don't know of any that prevent resources being destroyed if they are removed from the tf file (what the parent was talking about).
Azure Locks (which you can also manage with Terraform), Open Policy Agent, Sentinel rules, etc. will prevent a destroy even if you remove the definition from your Terraform codebase. Again, if you're not operationally mature enough, the problem isn't the tool, it's you.
"Operationally mature" is code here for "the gun starts out loaded and pointed at your foot". It's fine to point out that that's a suboptimal design for a tool.
>Operationally mature" is code here for "the gun starts out loaded and pointed at your foot"
No, it's code for "don't build a load bearing bridge if you don't understand structural engineering."
> It's fine to point out that that's a suboptimal design for a tool.
This isn't "suboptimal" though. If you delete a stored procedure in your RDBMS and it causes an outage, it's not because SQL/PostgreSQL is suboptimal. Similarly if you accidentally delete files from your file system, it's not because file systems are "suboptimal". It's because you weren't operationally mature enough to have proper testing and backups in place.
An easy way to get someone to admit that Terraform is a hacky child's language is to ask how to simply print out the values of the variables and resources you are using. This basic programming-language-101 functionality is not present in the language.
If you mean somehow printing things when the configuration is being applied... I think you just need to understand that it's neither a procedural language (it's declarative) nor general-purpose (it's infrastructure configuration).
A declarative language can absolutely print out what it does know at the time, which of course won't be everything. But if I'm taking an input and morphing it at runtime, like looping or just moving the information around in a data structure (which Terraform absolutely allows you to do), the Terraform runtime has all that information. I just can't get it out.
Plus, there are many times I don't want to have to use the REPL. Maybe I'm in CI or something. The fact that I cannot easily iterate over the values of locals and variables to see what they are in, say, some nested list or object, and just print them out as I go, for the things Terraform does know, is just crappy design.
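For anyone following along, the workarounds usually suggested (and fairly criticized in this thread) are an `output` block or the `terraform console` REPL; a sketch with a made-up local:

```hcl
locals {
  instances = {
    web = { size = "small" }
    db  = { size = "large" }
  }
}

# The usual "print" workaround: surface the value as an output, which
# is only displayed after a plan/apply cycle (or via `terraform output`).
output "debug_instances" {
  value = local.instances
}
```

Alternatively, `terraform console` can evaluate `local.instances` interactively, which is exactly the REPL the comment above doesn't want to reach for in CI.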
Kubernetes is not like the others in that list because it remains a declaration of intended state. There are for sure no "if", "loop", or even variables in the .yaml files. You may be thinking of the damn near infinite templating languages that generate said yaml, or even Kustomize that is JSONPatch-as-a-Service. GHA is not like the others because it is an imperative scripting language in yaml, not a "configuration language"
You have to be able to actually specify the output. And that does not handle all use cases. And it has requirements on how it can be run. And it takes the full lifecycle of the plan. And it won’t work in many circumstances without an apply.
So no. Terraform has the information internally in many cases. There’s just no easy way to print it out.
I concur. I looked pretty hard into adapting Serf as part of a custom service mesh and it had some bonkers designs such as a big "everything" interface used just to break a cyclic module dependency (perhaps between the CLI and the library? I don't recall exactly), as well as lots of stuff that only made sense if you wanted "something to run Consul on top of" rather than a carefully-designed tool of its own with limited but cohesive scope. It seemed like a lot of brittle "just-so" code, which to some extent is probably due to how Go discourages abstraction, but really rubbed me the wrong way.
My hot take is just that Vault isn't a good solution, and the permissions model is wholly inadequate.
Setting aside that it doesn't "feel" secure, the only thing everyone actually wants is a Windows AD file share with ACLs.
It's just that no one realises this: all the Vault on-disk encryption and unsealing stuff is irrelevant - it's solving a problem handled at an entirely different level.
Sorry HashiCorp, been there and got the T-shirt (pink) :)
Actually for me, the company I was at that IBM purchased was on the verge of folding, so in that case, IBM saved our jobs and I was there for many years.
We experienced arbitrary layoffs in 2023, followed by an ominous feeling that more layoffs were imminent. However, the announcement of a deal changed the situation.
Now, we are actively hiring for numerous positions.
Personally, I am not planning to stay much longer. I had hoped that our corp structure would be similar to RedHat, but it seems that they intend to fully integrate us into the IBM mothership.
I really wanted to work at HashiCorp in 2017/2018 and did five interviews in one day only to get ghosted[1]. That experience soured me on HC and its tools but I still admired them from afar.
I used to work at HashiCorp, and was a hiring manager. I know there's reasons why candidates might get given vague answers on why we're not proceeding, but I'd have been horrified to learn someone we interviewed got ghosted. Someone who was so far into the process that they did five interviews?! Inexcusable.
I'm so sorry that happened to you :( I hope you found somewhere else that filled you with excitement.
What do you expect is the reason this happens? I would suspect your skill assessment after a handful of interviews is sound and most people liked you. Do you think you just run into a person eventually that doesn't vibe?
I read this as the GHOSTING being the thing that bothers them, after a full day of interviews, it sounds like. The failure to be hired doesn't sound like it bothers them to me.
Who knows? I wish the hiring team remembered that real-life people are looking for work because they have bills to pay and regular communication is necessary.
Two months ago a founder reached out to me, gave me a coding project, I completed it (and got paid!), spoke with his co-founder, and then...nothing. At least I got paid but man, YOU reached out to ME. I don't get it.
If the company gets 30 applicants, 10 of which go to the final round and 7 of those are really good, if they only have 5 openings then 2 really good applicants are not getting offers.
I ended up having to move out of my hometown (Boston) to stay with my wife's friend's family and now we live in CA. I have a delicious loquat tree in my backyard so things worked out, haha!
Red Hat has been a very atypical approach. There has been some swapping of teams back and forth but, as far as I can tell (been out of it for a while), Red Hat is still quasi-independent. Still lots of changes (probably most notably because of a lot of growth) but strategic Red Hat areas still seem to be pretty independent.
Broadly independent, but filled to the gills with folks who spent a decade or more at IBM before landing at Red Hat. While this has been true of the rank and file for years, recently it's true in the C-suite.
Was probably truer of middleware than other areas. (Which I gather is largely going over to IBM.) Linux had a very significant DEC legacy. OpenShift was essentially greenfield from a startup acquisition (that got totally rewritten for Kubernetes anyway) and I'm not sure I would characterize people in that area as broadly coming from any particular large vendor.
> Red Hat is still quasi-independent. Still lots of changes (probably most notably because of a lot of growth) but strategic Red Hat areas still seem to be pretty independent.
Yes, the rules have changed. It seems the idea is to get big fast, increasing revenue without regard to profits, eventually have a great IPO, and then one of the following:
1. hope you can sucker someone into buying the company
2. keep the VC $ flowing and continue growing, then loop to # 1
3. worst case, need to start making a profit and hope you can survive until #1. If #1 does not happen, pray(?).
During this time, the founders are pulling in a great salary.
As happened with the other startups acquired by IBM, this too shall pass through the dinosaur's digestive system and be ejected as a dump. HashiCorp's products are already showing the signs of legacy software. IBM is the nursing home for this sort of aging stuff.
I'm a heavy user of the Terraform and Vault products. Neither belongs in this era. I also worked for a startup acquired and dumped by IBM.
What are the modern equivalents? For Terraform I'd imagine it's Pulumi or OpenTofu but what is it for Vault? Last I checked OpenBao didn't seem to have much juice but it's been a minute since I did so. Or are there unrelated projects in this space that are on the same trajectory as Hashicorp was a decade ago?
With most of these types of plays, the home page ends up stacked with toolbars / marketing / popups / announcements from the parent company, with their branding everywhere ("IBM XXX powered by Red Hat")... I see very little IBM logo or corporate pop-up policy jank on redhat.com.
Nice. When I opened their homepage, I could not find anything obvious showing they are owned by IBM. I literally had to search the HTML source to find the character sequence "IBM"!
IBM acquired SoftLayer in 2013 and the bluewashing didn't reach a fever pitch until 2019 or so. Also, the pandemic slowed things down at an already dinosauric company. IBM is over a hundred years old. I have faith that it will get around to entirely ruining Red Hat sooner than later.
In some ways, to me it feels like a turning point for the GFC/ZIRP-through-COVID era of tech companies with no path to profit.
After the haze of the LLM bubble passes, I hope startups have an exit strategy other than "we'll just get 0.01% of users to pay 6+ figures for support" or "ads".
Good tech deserves a good business model such that it can endure for the long term.
Yep, we did. I wrote it off around the time of the licence change, just after they decided to ditch the TF Team plan in favour of the utterly ridiculous "Resources Under Management" billing model.
I knew the company had lost the plot at that point.
Who's the target audience for this pricing that can afford this? The RUM pricing is indeed quite ridiculous.
It feels quite ridiculous, especially if you are managing "soft" resources like IAM roles via Terraform / Pulumi. At least with real resources (say, RDS instances), one can argue that the Terraform / Pulumi price is a small percentage of the cloud bill. But IAM roles are not charged for in the cloud, and there are so many of them (especially if you use IaC to build a very elaborate scheme).
There is an argument to be made that price-sensitive customers are a neglected market. Granted, marketing to them is very different - they're prone to being scooped if someone comes by willing to sell your same product to them at a loss (hi, Amazon and Walmart) - but there are a lot more of them and you're not fighting every startup on the planet for the same handful of clients.
Businesses have made a killing in China and India for a reason, after all.
+ There’s an argument against every rule of thumb.
+ For what it is worth, the just-one-percent-of-all-Chinese pitch is historically a poor business strategy.
+ As you point out, targeting price sensitive customers puts you in competition with Walmart and Amazon. Not only that but you are competing for their worst customers.
> you're not fighting every startup on the planet for the same handful of clients
Not having access to good clients/customers suggests the business idea might not be viable. Chasing money from people without the wherewithal or will to pay, does not make your business idea viable.
That's fair. I wasn't coming for you and I'm certainly not trying to fight you from some kind of authority - I'm definitely not a businessperson.
The only point I was trying to get across is that even "bad" customers are still customers, and that there's still a lot of money to be made meeting people's needs doing the work others don't want to do. I feel like this applies from the bottom of the socioeconomic ladder all the way to the top - that's all. Perhaps I should've made that clearer, and that's on me.
An unsolicited side note: I think the bristling at this post was because of the language you were using. Talking about the poor as if they were to be discarded made you look a bit as if you have no empathy, which might not be fair to you. I get it: business requires being hard-hearted if you want to get ahead, because if you don't make the tough decisions, someone else will. But it probably wasn't your best look, you know?
> Talking about the poor as if they were to be discarded
The context was Hashicorp pricing for a web service, I was not talking about the poor.
Not being able to afford a B2B service is not an injustice.
> there's still a lot of money to be made meeting people's needs doing the work others don't want to do. I feel like this applies from the bottom of the socioeconomic ladder
Are you betting your breakfast on walking your talk?
> even "bad" customers are still customers
That’s why I don’t recommend going out to find them. They tax your ability to provide high quality. You will have enough problems without trying to get lava from a turnip.
> it probably wasn't your best look, you know
For better or worse, it’s not going to keep me up grieving on long winter nights.
Good for who? Good for people getting bonuses? Good for executives?
It doesn't seem to be good for the customers or the people using the software or the people contributing to the open source code. It also doesn't seem to have been good for the investors, looking at the other comments.
It also creates horrible incentives. Oh I won't run this in isolated project or under a separate service account since that costs more, let's just pile everything together.
I'm finding that the basic backend functionality of Pulumi's and Terraform's managed clouds is fairly easy to build (especially Terraform's; I can't quite believe how absurdly simple their cloud is...)
It was made apparent right after the IPO. Our team got a new VP who changed the mantra from practitioner-first to enterprise-first. Soon after, they laid off anyone not working on enterprise features. It was the sad death of a great company culture. Mitchell left around the same time, which, IMO, speaks volumes.
The older I get, the more I'm convinced that practitioner-first is the only reasonable way to drive a product's features, while enterprise-first is the only reasonable way to drive a company's revenue.
Which is to say strong sustainable products need both.
... but ffs don't let the entire company use enterprise as a reason to ignore practitioner feature requests.
This is probably inaccurate, but it seemed like they wrote it off as a safe move, with their main competitor, Pulumi, getting away with it.
However, to play devil's advocate, the number of Terraform resources is a (slightly weak) predictor for resource consumption. Every resource necessitates API calls that consume compute resources. So, if you're offering a "cloud" service that executes Terraform, it's probably a decent way to scale costs appropriately.
I hate it, though. It's user-hostile and forces people to adopt anti-patterns to limit costs.
> Every resource necessitates API calls that consume compute resources
In that world, I think it'd make more sense to charge per runtime second of performing an operation. I understand the argument you are making, but the issue is that you get charged even if you never touch that resource again via an operation.
It might make sense if TFC did something, anything, with those resources between operations to like...manage them. But...
> However, to play devil's advocate, the number of Terraform resources is a (slightly weak) predictor for resource consumption. Every resource necessitates API calls that consume compute resources. So, if you're offering a "cloud" service that executes Terraform, it's probably a decent way to scale costs appropriately.
That would make sense if you paid per API call to any of the cloud providers.
What happens when you run `terraform apply`? Arguably, a lot of things, but at its core it:
- Computes a list of resources and their expected state (where computation is generally proportional to the number of resources).
- Synchronizes the remote state by looking up each of these resources (where network ingress/egress is proportional to the number of resources).
- Compares the expected state to the remote state (again, where computation is generally proportional to the number of resources).
- Executes API calls to make the remote state match the expected state (again, where network ingress/egress is proportional to the number of resources).
- Stores the new state (where space is most certainly proportional to the number of resources)
This is a bit simplified, but my point is that in each of the five operations, the number of resources can be used as a predictor for the consumed compute resources (network/cpu/memory/disk). A customer with 10k resources is necessarily going to consume more compute resources than one with 10 resources.
You can probably get a sense of it based on your own usage of Terraform and the log output (or the time various resources take to get managed in the Terraform Cloud/Enterprise UI). I think in the majority of cases you'll see that the bulk of the compute time is actually network-bound: not because of the number of resources, but because the server at the other end (AWS, Azure, GCP, etc.) is doing a lot of work. I know in some cases things like SQL Server clusters on Azure can take literally hours to provision. Terraform will spend that "compute" time sitting there waiting; it's not actually doing much that's resource-intensive.
And then at the end as you said "stores the new state". Which is basically a big JSON file. 10 resources? 1M resources? I'll leave you to work out how much it probably costs to save a JSON file of that size somewhere like S3 ;)
Yeah, I'm not putting it forward as a justifying argument (just playing devil's advocate). However, it's probably how they justify it to themselves :) What makes it extra absurd is the price they charge per resource. That's where it turns into robbery.
> forces people to adopt anti-patterns to limit costs
The previous pricing model, per workspace, did the same. Pricing models are often based on "value received", and therefore often can be worked around with anti-patterns (e.g. you pay for Microsoft 365 per user, so you can have users share the same account to lower costs).
The previous "per apply" based model penalized early stage companies when your infrastructure is rapidly evolving, and discouraged splitting state into smaller workspaces/making smaller iterative changes.
Charging by RUM more closely aligns the pricing to the scale/complexity of the infrastructure being managed which makes more sense to me.
That said, it has tempted me to move management of more resources into Kubernetes (via Crossplane / Config Connector).
Depends how you define insider. Employees were subject to a 6 month lockup and during that time the price dropped dramatically, but they still had to pay taxes on the $80 IPO price. Execs and institutional investors that were able to sell at IPO made out quite well though.
It's really not. Let me know if you find any lenders that will let me pay off a mortgage with a capital loss.
At least in the startup narrative that circulates on HN, most early employees at a company with that kind of IPO would hope for a lottery-level financial windfall. Now their upside is that if they manage to get lucky a second time, they get to offset their winnings? :/
Were you paying Terraform for anything at the time?
My doubt in the value of the company was that I've been using Terraform for years in Enterprise settings and never needed to pay the company for anything.
Eh. Lots of retail investors do well with the right stock. Lots of Apple investors have done well over the years. Microsoft even, with the right timing.
They didn't with HashiCorp, certainly. I bought some, but not too much, and sold it as part of a housecleaning a few years back (which I'm glad I did).
Broadcom VMware play. If you're invested as an enterprise in the ecosystem, it's going to be a while before you can extricate yourself. In the meantime, you must pay up.
I'm pretty good at engineering fast moves. I took a company off of Salesforce in 45 days. VMware servers are even easier to change out. Never done Terraform though.
Terraform is OSS, unless you're using the hosted HCP version (workspaces, I think they're called?). I've been using Terraform heavily and at scale since v0.7, and I have never once thought I needed, or would pay for, something like that.
I'm well aware, and have contributed to the OpenTofu project in a small manner. I hope you'll forgive me for slightly misspeaking - any version of Terraform before the license change is OSS. It is, however, perfectly free to use, and most companies I've worked with are hard-pinned on a particular version of Terraform and rarely on the bleeding edge.
"HashiCorp's capabilities drive significant synergies across multiple strategic growth areas for IBM, including Red Hat, watsonx, data security, IT automation and Consulting"
Years before '93-'96 when I worked at Kaleida [1], a joint venture of Apple and IBM, alongside Taligent [2] their AIM Alliance [3] sister company, I laughed at the old joke:
Q: What do you get when you cross Apple and IBM?
A: IBM.
But then the joke was on me when I finally worked for a company owned by Apple and IBM at the same time, and experienced it first hand!
I gave Lou Gerstner a DreamScape [4] demo involving an animated disembodied spinning bouncing eyeball, who commented "That's a bit too right-brained for me." I replied "Oh no, I should have used the other eyeball!"
Later when Sun was shopping itself around, there were rumors that IBM might buy it, so the joke would still apply to them, but it would have been a more dignified death than Oracle ending up lawnmowering [5] Sun, sigh.
Now that Apple's 15 times bigger than IBM, I bet the joke still applies, giving Apple a great reason NOT to merge with IBM.
That said, I think a playbook in HCL would be worlds better than the absolutely staggering amount of nonsense needed to quote Jinja2 out of yaml
I would also accept them just moving to the GitHub Actions style of ${{ or ${% which would for sure be less disruptive, and (AIUI) could even be opt-in by just promoting the `#jinja2:variable_start_string:'${{', variable_end_string:'}}'` override up into playbook files, not just .j2 files
Turns out, nobody's quite figured out how to successfully charge for free shit, but it's moot when you can just burn venture capital for ten years until you get acquired and chopped for parts.
One of my friends was in management at HashiCorp, and what he told me was that there were a series of bad internal promotions to product management and heads of development that tanked the company. At the same time there was a huge problem with leftist activist employees holding the company hostage. Not surprised they got scooped up for pennies on the dollar.
These clowns want $2500 goddamned american dollars for the privilege of reading their bloviations on this topic, which i absolutely will not pay.
You know it's bad when the only people making money on this crap are management consultants.
Thinking back to 2014 using vagrant to develop services locally on my laptop I never would have imagined them getting swallowed up by big blue as some bizarre "AI" play. Shit is getting real weird around here.
> These clowns want $2500 goddamned american dollars for the privilege of reading their bloviations on this topic, which i absolutely will not pay.
You aren’t the target market for their “bloviations” - they are targeted at executives, and it isn’t like the executive pays this out of their own pocket, there is a budget and it comes out of the budget. Plus these reports generally aren’t aimed at technical people with significant pre-existing understanding of the field, their audience is more “I’m expected to make decisions about this topic but they didn’t cover it in my MBA”, or even “I need some convincing-sounding talking points to put in my slides for the board meeting, and if I cite an outside analyst they can’t argue with that”
Commonly with these reports a company buys a copy and then it can be freely shared within the company. Also $2,500 is likely just the list price and if you are a regular customer you’ll get a discount, or even find you’ve already paid for this report as part of some kind of subscription
Anyone prioritizing this nerfed, mindless dogshit over what their team is telling them and what's going on in the world around them is both an incompetent leader and a total idiot
A lot of the people paying for these analyst firm reports are salespeople, so they can pass them on to their customers/prospects (to legally do that you often have to pay extra for "redistribution rights")… and then the customer/prospect gets to read it for free.
Who might not have much of an engineering team, or not one with relevant expertise… and why should they trust the vendor’s engineering team? If they are about to sign a contract for $$$, being able to find support for it in an independent analyst report can psychologically help a lot in the sales cycle
While the most useful reports for sales are those which directly compare products, like Gartner Magic Quadrant or Forrester Wave - a powerful tool if you come out on top - these kind of more background reports can help if the sales challenge is less “whose product should I buy?” and more “do I even need one of these products? should we be investing money in this?”
It has never paid my bills, in that I've never worked for an analyst firm.
My bills have been paid by working for vendors, where I have seen how sales and marketing use their reports in action. I have seen the amount of effort engineering and product management put in to try to present the best possible vision of their product and its future potential to these analysts. (I've never been personally directly involved in any of those activities though, I've just observed them from the margins.)
But, it isn't like the vendors have a huge amount of choice – if you refuse to engage with the analysts and utilise their reports in your sales cycle, what happens when your competitors do?
This sort of thing is why nobody gives a shit about IBM anymore and they have to keep just buying relevant companies to stay relevant.
Hopefully they do the right thing and hand HashiCorp over to Red Hat so they can open source the shit out of it, and do things like make OpenTofu the proper upstream for it, etc.
Did you intend to reference "It's a Wonderful Life"? When I read your comment I imagine a tiny child in Jimmy Stewart's arms, exclaiming the joys of capitalism ;-)
Who the heck is IDC's customer base, exactly? $2,500 for that, or $7,500 for this one about – drumroll, please – feature flags!
"Modern digital businesses need to be able to adapt to changing end-user demand, and since feature flags decouple release from deployment, it provides a good solution for improving software development velocity and business agility," said Jim Mercer, program vice president of IDC Software Development DevOps and DevSecOps. "Further, feature flags can help derisk releases, enable product experimentation, and allow for targeting and personalizing end-user experiences."
Something I always respected in Americans is their talent for making money from absolutely nothing, providing zero or negative value in the process of doing so. Obviously doesn't apply to everyone, but you have more than a fair share of these people.
Relatively few IDC clients are paying retail for single reports, other than reprint rights. They're clients with broad employee access to events and reports in various areas. I had access for many years and, yes, having (supposedly validated) data is more or less essential for lots of presentations and other types of documents because, otherwise, your claims are viewed as pulling stuff out of your rear end.
> more or less essential for lots of presentations and other types of documents
Wait. What? This reminds me of the trope of the "wikipedia citation" in high school and college.. that move was worth at most a C+. Are you seriously saying these fucks actually seriously cite this bullshit? In this day and age where even crowdsourced wiki articles seem "credible"? What the actual fuck? I hate this shit.
Yeah but I could make this shit up in like.. 15-20min/mo vs working for a living like a normal human person. I'm just imagining the sheer number of vertical feet of skiing I'm missing out on and seeing red.
So, what is the practical TL;DR for everyone who is neither an employee nor an investor? HashiCorp kinda made a lot of significant stuff, but that stuff is mostly FOSS and the commercial product is very niche. I am kinda surprised IBM even bought it, because it isn't very clear to me how commercializable this stuff is. So what does it mean? Will IBM most likely kill some FOSS products? Is this even possible? Were, say, Terraform or Nomad developed mostly by internal devs, or is there a solid enough community already to keep up with development, or simply fork the tool if things go south?
The consolidation of power and IP into just a handful of tech companies is worrying to me. Having had the misfortune of working at IBM for a few months, I expect IBM leadership to give it the Red Hat treatment. The dinosaurs at IBM will shelve the IP and sell it for parts. Maybe Bloodmoar will buy up the rest and squeeze whatever profit remains from the acquisition.
If given the chance, just take the exit rather than trying to integrate into IBM.
There have been larger changes in areas that the SEC could point their fingers at, to make things more uniform between IBM and Red Hat. Sales also had some changes on both sides.
For engineering almost no difference other than switching to Slack.
Nomad is way easier to self-manage than K8s, but GCP does that for me, with all the compliance boxes checked, for extremely cheap. Every cloud provider is in that boat. Nomad will be more work and more money, be it compute or enterprise fees. I'm sticking with k8s.
I agree up to a certain scale. I've managed a large Nomad/Consul setup (multiple clusters, geographically separated), and it was nothing but a nightmare. I believe fly.io had a similar experience.
20k+ nodes and 200k+ allocs. To be fair, Kubernetes cannot support this large of a cluster.
Most of my issues with it aren't related to the scale though. I wasn't involved in the operations of the cluster (though I did hear many "fun" stories from that team), I was just a user of Nomad trying to run a few thousand stateful allocs. Without custom resources and custom controllers, managing stateful services was a pain in the ass. Critical bugs would also often take years to get fixed. I had lots of fun getting paged in the middle of the night because 2 allocs would suddenly decide they now have the same index (https://github.com/hashicorp/nomad/issues/10727)
Definitely not better than Kubernetes, but I don't regret working on it and I like it as a simpler alternative to Kubernetes. I remember trying to hire people for it and not a single person ever even heard of it.
> I remember trying to hire people for it and not a single person ever even heard of it.
I know, it's really sad. Kubernetes won because of mindshare and hype and 500,000 CNCF consulting firms selling their own rubbish to "finally make k8s easy to use".
* Rational Team Concert instead of Git or even Subversion.
* Rational ClearCase instead of Git ( https://stackoverflow.com/questions/1074580/clearcase-advant... ).
* Using a green-screen terminal emulator on a Windows PC to connect to a mainframe to fill out weekly timesheets for payroll, instead of a web app or something.
I'll concede that I like the Eclipse IDE a lot for Java, which was originally developed at IBM. I don't think the IDE is good for other programming languages or non-programming things like team communication and task management.
The green screens tend to be much quicker and more responsive than the web frontends that are developed to replace them.
I've seen a lot of failed projects for data entry apps because the experienced workers tend to prefer the terminals over the web apps. Usually the requirement for the new frontend is driven by management rather than the workers.
Which is understandable to me as a programmer. If it's a task I'm familiar with, I can often work much more quickly in a terminal than with a GUI. The assumption that this is different for non-programmers, or that they are all scared of TUIs, is often mistaken. The green screens also tend to have fantastic tab navigation and other keyboard navigation functionality that I almost never see in web apps (I'm not sure why, as I'm not a front-end developer; maybe somebody else can explain that).
I'll defend green screens all day long. Lots of people like them and I like them.
Everything else you listed I would agree with you about being terrible and mostly hated though.
I second the TUI argument here.
Back in maybe 2005, in our ~60-person family business, I had the pleasure of watching an accountant use our bespoke payroll system. That was a DOS-based app, running on an old Pentium 1 system.
She was absolutely flying through the TUI. F2, type some numbers, Enter, F5 and so on and so on, at an absolutely blistering speed. Data entry took single-digit seconds.
When that was changed to a web app a few years later, the same action took 30 seconds, maybe a minute.
Bonus: a few years later, after we had to close shop and I moved on, I was onboarding a new web dev. When I told him about some development-related scripts in our codebase, he refused to touch the CLI. Said that CLIs are way too complicated and obsolete, and expecting people to learn that is out of touch. And he mostly got away with that, and I had to work around it.
I keep thinking about that. A mere 10 years before, it was within the accepted norm for an accountant to drive a TUI. Inevitable, even. And now, I couldn't even get a "programmer" to execute some scripts. Unbelievable.
I was at a ticket window buying concert tickets a couple weeks ago and was surprised to see the worker using the Ticketmaster TUI / Mainframe interface. She flew through the screens. The same experience on the Ticketmaster website is awful.
Not just accountants. I remember watching fully “non-technical” insurance admin / customer service people play the green screen keyboard like they were concert pianists. People can cope with a lot when they have to.
There is a learning curve, but it isn't coping. One of the great things about terminals: with experience you can type ahead. Even before the form has fully opened you can type data, which is queued in the input buffer, and work efficiently. In a modern GUI application, a lot of time is wasted reaching for the mouse, aiming, and waiting for the new form to render. That is what requires coping.
I had to interact with a Windows application that let you collect data with a digital form. We used it to digitize paper-based surveys by mapping free-form questions to a choice list.
The best part was that it was entirely keyboard-driven. If you can touch type, you can just read the paper and type away. The job was mind-numbing, but the software itself was great.
In a native single-threaded UI, you can type ahead too. But it doesn't work on the web unless the page effectively reimplements an input queue.
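The input-queue idea is simple to sketch. Here's a toy Python version (the function names are illustrative, not any real toolkit's API): keystrokes arriving while the UI is still busy are buffered instead of dropped, and the next form drains the buffer the moment it's ready:

```python
import queue

# Buffered keystroke events, as a native single-threaded UI's event loop
# would hold them while the app is busy rendering the next form.
events = queue.Queue()

def key_pressed(ch):
    # OS/terminal side: always enqueue immediately, even if the app is busy.
    events.put(ch)

def next_form_ready():
    # App side: drain whatever the user typed ahead while we were loading.
    buffered = []
    while not events.empty():
        buffered.append(events.get())
    return "".join(buffered)

# The user types while the form is still "loading"...
for ch in "4200":
    key_pressed(ch)
# ...and the form picks the keystrokes up the moment it renders.
print(next_form_ready())  # -> 4200
```

A web page generally has no such queue: keystrokes delivered before the target input exists simply vanish, which is exactly why the type-ahead workflow the accountant relied on breaks in browser-based replacements.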
Case in point: the aforementioned accountant obviously hated the new GUI-based app, exactly because of what you said. Aiming the mouse, looking for that button, etc. slows you down.
It doesn't have to. The tab order and shortcuts are there and very usable... if anyone bothers to implement them.
Not only implement, but implement them consistently and making users aware.
Consistency is a thing. Old Windows apps often followed a style guide to some degree; that was lost with the web (which is also hard, as style guides differ between systems like Windows and Mac), and it never came as close as the mainframe terminal world, where function keys had global effects.
Indeed. One of the things I keep having to tell younger people is: “webapps have no HIG!”
All of the major platforms have a HIG that tells developers how to maximize the experience for users. Webapps have dozens of ways to do things like “search”. Those who never developed for a platform with a HIG do not value it and keep reinventing everything.
I wouldn't say cope; the green screen stuff has predictable field input and predictable rules around selecting elements.
Despite its obvious downsides, for people who do regular form input and editing, it's often better than the flavor of the day web framework IMO
I mean, I wouldn't choose to use it, but I get it
Things have changed back though: the CLI is hot again, at least amongst developers.
I find it ironic that we developers prefer to use CLI because it's quick, efficient, stable, etc., but what we then deliver to people as web apps is quite the opposite experience.
It's what the default is. TUIs default to fast, stable, high information density, so you have to do real work to make them otherwise. And I say this next part as primarily a front-end developer the past few years: web apps default to slow, brittle, too-much-whitespace, "make the logo bigger" cruft, and it takes real work to make them otherwise.
At the end of the day most people are lazy and most things, including (especially?) things done for work, are low quality. So you end up with the default more often than not.
in my experience, many managers tend to try to dumb products down as much as possible, to make it work for the most people. the problem is that this, together with the usual bad ui/ux, makes the product inefficient to use, especially for power users.
then, every couple of years, a startup tries to carve out a niche by making a product that caters to power users and makes efficiency a priority. those power users adopt it and start to recommend it to other regular users. this usually also tends to work quite well because even regular users are smarter than expected, especially when motivated. thus the product grows, the startup grows and voila, a tech giant buys it.
now one of the tech giants managers gets the task to improve profits and figures out, the way to do this is to increase the user base by making the product easier to use. UX enshittification ensues, the power users start looking out for the next niche product and the cycle starts anew.
rule of thumb: if the manager says "my grandma who never used a computer before in her life must be able to use it", abandon ship.
An application I used to deal with was similar, but with a somewhat quirky developer, who would deliberately flip between positive/negative confirmation questions, e.g.:
- Confirm this is correct? (Yes=F1, No=F2)
- Would you like to make any changes? (Yes=F1, No=F2)
And maybe sometimes flip the yes/no F-key assignments as well.
In theory this was done to force users to read the question and pay attention to what they were doing, in practice, users just memorized the key sequences.
Ah just randomly pick between F1 and F9 for the two questions and don't necessarily put them in order. Yes=F7, No=F3
/s
We had a Tower of Babel collapse when we switched to web UI. We gained a million things and lost a million things. There was an era, from around 1985 to the early 2000s, when a large majority of applications had a (somewhat) consistent UI, based partially on MS Windows and partially on some IBM "common UI" design guide principles. The hallmarks of it were:
- keyboard navigation was possible
- mostly consistent keyboard nav
- a common, limited set of UI controls with consistent behaviour
- for serious applications, some actual thought about how the user was supposed to navigate through the system during operation (efficiency)
Post-web and post-9/11, where browser UI has infested everything, we are now in a Cambrian explosion of crayon-eating UI design.
It seems our priorities have been confused by important things like 'Hi George. I just noticed, that for the admin panels in our app, the background colours of various controls get the wrong shade of '#DEADBF' when loading on the newest version of Safari, can you figure out why that happens?'. 'Oh, and the new framework for making smushed shadows on drop-downs seems to have increased our app's startup time on page transitions from 3.7 seconds to 9.2 seconds, is there any way we can alleviate that, maybe by installing some more middleware and a new js framework npm module? I heard vite should be really good, if you can get rid of those parts where we rely on webpack?'
These days most web apps aren’t written to take advantage of the browser’s built-in tab navigation, and unless the dev is a keyboard user, they don’t even think to add it. This is largely the fault of React reinventing everything browsers already have built in, and treating accessibility as an afterthought. Bare metal web apps written in straight-up HTML do have decent tab navigation. They’re still not as snappy as a green terminal app, though. My first summer temp jobs during college were data entry, in the era when you might get a terminal app or a web app, and the old apps invariably had better UX.
>The green screens tend to be much quicker and more responsive than the web frontends that are developed to replace them.
Agree! Back in 2005, I was involved in a project to build a web front end as a replacement for the 'green screen' IBM terminal UI connecting to AS400 (IIRC). All users hated the web frontend with passion, and to this day, I do not see web tech that could compete in terms of data entry speed, responsiveness, and productivity. I still think about this a lot when building stuff these days. I'm hoping one day I'll find an excuse to try textualize.io or something like this for the next project :)
This only matters if "quick and more responsive" is the only thing that matters. Yes of course you can enter payroll timesheets on a TUI if you spend days/weeks/months gaining that muscle memory. The same way you can edit in vim much faster than vscode or Eclipse if you spend weeks/months/years gaining that muscle memory.
The fact that someone who has been doing it for years can do it faster is obvious, and pretty irrelevant.
Take someone who has never used either, and they'll enter data on the web app much faster.
You don't see keyboard nav in most web apps for similar reasons. First-time users won't know about it, there's no standard beyond what's built-in the browser (tab to next input, that kind of thing), and 90% of your users will never sit through a tutorial or onboarding flow, or read the documentation.
IBM eventually figured out that these products were terrible too, even if they saved money on paper; it sold the Rational/Lotus/Sametime teams to an Indian competitor and discontinued usage internally (I think; it's a big company).
There are people even today who want Lotus Notes back, still mourn its loss.
huh isn't it funny when you dogfood but instead of food it's... nvm
But yeah some elements of that list have convinced me to steer very clear from any products from that company
I remember using Rational Clear case at my first job. Yeah, in that case count me in on the list of people that revile the IBM products they've had to use.
> I don't think the IDE is good for other programming languages or non-programming things like team communication and task management.
It works great for Python and C++, honestly. If you're a solo dev, Mylyn does a great job of syncing with your in-code todo list and issue tracker, but it's not as smooth as the IDE side.
However, its Git implementation is something else. It makes Git understandable and allows this knowledge to bleed back into the git CLI. This is why I've been using it for 20+ years now.
Eclipse was nice but WebSphere Application Developer was pretty horrible - I'm not sure how they achieved that! (WSAD was/was built on Eclipse)
If you used SameTime with Pidgin, SameTime didn't suck. But maybe that's because Pidgin is awesome, and not because of SameTime.
Yeah I was just about to say this -- I used Sametime via Pidgin (I think it may still have been called Gaim back then) on my work Linux machine and it was actually quite nice.
My favourite Sametime feature within Pidgin was, well, tabs (I can't remember if the Windows client had tabs as well..?), which was revolutionary for an IM client in 2005.
But my secret actual favourite feature was the setting which automatically opened an IM window /tab when the other person merely clicked on your name on their side (because the Sametime protocol immediately establishes a socket connection), so you could freak them out by saying hello even before they'd sent their initial message.
And what was that thing they used for email?
You mean Lotus Notes?
Can we add DOORS to this list please?
I have no idea how/why IBM of all places developed or sold this software but it badly needs to die in a fire.
Database technology which would seem outdated in 1994 with a UI and admin management tools to match.
DOORS is/was a requirements management tool, and frankly speaking it was crap, but I have never seen another piece of software as good and comprehensive at requirements management.
I expect it to be still used in aviation or army related domain, maybe pharma.
I think this is an interesting graph comparing web searches for "terraform alternative" and "opentofu". Notice the spike when the IBM rumors began, and the current spike now that the acquisition is complete?
https://trends.google.com/trends/explore?date=all&q=terrafor...
Both of those are still a rounding error compared to searches for Terraform though:
https://trends.google.com/trends/explore?date=all&q=terrafor...
That being said, it'll be interesting to see if it's still a rounding error 2 years from now.
How is Red Hat going after the acquisition by IBM? From my view, it is going well. The enterprise product (RHEL) is still excellent.
Dropping CentOS was a terrible decision. I’m not sure if that happened before or after the acquisition though.
It mostly happened afterwards but it was not driven by IBM.
Centos stream still exists and it is in fact the actual upstream of rhel.
CentOS was the downstream of RHEL, and many more people used it than Red Hat/IBM knew or wanted to admit. I'd argue that at least 90% of their users (by number of installs) didn't even need any help to configure or troubleshoot it.
But with a very IBM move, and with some tunnel vision, they got triggered by the few people who abused the Red Hat license model and rugpulled everyone; most importantly universities, HPC/research centers and other (mostly research) datacenters, which were able to sew their own garments without effort.
Now we have Alma, which is a clone of CentOS Stream, and Rocky, which tries to be bug-for-bug compatible with RHEL. It's not a nice state.
They damaged their reputation, goodwill and most importantly the ecosystem severely just to earn some more monies, because number and monies matter more than everything else for IBM.
Remember. When you combine any company with IBM, you get IBM.
> they got triggered by the few people who abuse Red Hat license model and rugpulled everyone
Alma is not a clone of CentOS Stream. You can use Alma just like you were using CentOS. It's really no different than before except for who's doing the work.
I agree that communication was bad. But why do you believe that Red Hat isn't able to screw up on their own?
> Alma is not a clone of CentOS Stream.
I'll kindly disagree on this with you. Reading the blog post titled "The Future of AlmaLinux is Bright", located at [0]:
> After much discussion, the AlmaLinux OS Foundation board today has decided to drop the aim to be 1:1 with RHEL. AlmaLinux OS will instead aim to be binary compatible with RHEL.
> The most remarkable potential impact of the change is that we will no longer be held to the line of “bug-for-bug compatibility” with Red Hat, and that means that we can now accept bug fixes outside of Red Hat’s release cycle.
> We will also start asking anyone who reports bugs in AlmaLinux OS to attempt to test and replicate the problem in CentOS Stream as well, so we can focus our energy on correcting it in the right place.
So, it's just an ABI-compatible derivative distro now. Not bug-for-bug compatible like old CentOS and current Rocky Linux.
TL;DR: Alma Linux is not a RHEL clone. It's a derivative, mostly pulling from CentOS Stream.
> I agree that communication was bad. But why do you believe that Red Hat isn't able to screw up on their own?
Absorption and "Rebranding and Repositioning" of CentOS both done after IBM acquisition. RedHat is not a company anymore. It's a department under IBM.
Make no mistake. No hard feelings towards IBM and RedHat here. They are corporations. I'm angry to be rug-pulled because we have been affected directly.
Lastly, in the words of Bryan Cantrill:
> You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end.
[0]: https://almalinux.org/blog/future-of-almalinux/
> Absorption and "Rebranding and Repositioning" of CentOS both done after IBM acquisition. RedHat is not a company anymore. It's a department under IBM.
You're wrong. CentOS Stream was announced September/October 2019, too close to the IBM announcement to be an IBM decision; it had been in the works for quite some time before, and in fact this all started in 2014 when Red Hat acquihired CentOS.
From 2014 to ~2020 you were under the impression that nothing had changed, but Red Hat had never cared about CentOS-the-free-RHEL. All that Red Hat cared about was CentOS as the basis for developing their other products (e.g. OpenStack and OpenShift), and when Red Hat came up with CentOS Stream as a better way to do that, Red Hat did not need CentOS Linux anymore.
Anyhow, I've been through that and other stuff as an employee, and I'm pretty sure Red Hat is more than able to occasionally fuck up on its own, without any need for interference from IBM.
Bug for bug is a sham and always was. It's a disservice to users to only clone something.
Underneath it all, compatibility is what matters. At AlmaLinux we still target RHEL minor versions and will continue to do so. We're a clone in the sense of full compatibility, but a derivative in the sense that we can do some extra things now. This is far, far better for users and also lets us actually contribute upstream and have a more mutually beneficial relationship with RH versus just taking.
I'll say it depends.
Sometimes the hardware or the software you run requires exact versions of the packages with some specific behavior to work correctly. These include drivers' parts on both kernel and userland, some specific application which requires a very specific version of a library, so on and so forth.
I for one, can use Alma for 99% of the time instead of the old CentOS, but it's not always possible, if you're running cutting edge datacenter hardware. And when you run that hardware as a research center, this small distinction cuts a lot deeper.
Otherwise, taking the LEAPP and migrating to Alma or Rocky is a no-brainer for an experienced group of admins. But when the computer says no, there's no arguing with that.
We don't change the expected versions. We might patch/backport more to them if there are issues, but the versions remain.
Basically the goal is still to fit the exact situation you just brought up. I'm not aware of this ever not being the case; if it weren't the case for some reason, then we have a problem we need to fix.
All of the extra stuff we do, patch, etc. is with exactly what you just stated in mind.
I'll be installing a set of small servers in the near future. I'll be retrying Alma in a couple of them, to give it another chance.
As I said, in some cases Rocky is a better CentOS replacement than Alma is.
But to be crystal clear, I do not discount Alma as a distribution or belittle the effort behind it. Derivative, clone or from scratch, keeping a distro alive is a tremendous amount of work. I did it, and know it.
It's just me selecting the tools depending on a suitability score, and pragmatism. Not beef, not fanaticism, nothing in that vein.
Sustainability is one of the core reasons why we are not using RHEL SRPMs to build AlmaLinux. RH doesn't want us doing that, and doing so would be unsustainable and bring into question the future of AlmaLinux as it can, and likely will, turn into a game of cat/mouse getting those SRPMs :)
Let us know if you have any issues!
Red Hat bringing CentOS in-house (well before IBM entered the picture) was IMO one of the first in a string of expedient decisions that were... unfortunate. When I was at Red Hat I loudly argued against some of the ways things were handled but I also understand why various actions were taken when they were.
I'd also argue that CentOS classic wasn't entirely bug-for-bug compatible, but it was probably close enough for most. It shared sources but used a different (complex) build system, as I understand it.
That closeness allowed CentOS to be a drop-in replacement for RHEL for thousands of installations and exotic hardware combinations. Unfortunately, we don't have this capability anymore. Rocky bears most of that load now.
But CentOS Stream is not CentOS.
They are completely different products just reusing branding to confuse what people are asking for.
RHEL Developer is closer, as a no-support, no-cost version of RHEL, but you still have to deal with the licence song and dance.
CentOS gave folks a free version that let you run dev environments that mostly mirrored prod, without worrying about licences or support. CentOS Stream doesn't do this, out of principle. It's upstream.
But for all practical purposes, that is dropping CentOS. They completely changed the identity of the product, so the fact it has the same branding isn't going to placate anyone.
So? That just means that it is not necessarily compatible with the current version of RHEL deployed on our servers.
It's going basically fine. If you're in engineering you would never notice the difference.
Companies are often bought and told that nothing will change, and as long as they can pull their weight, this may be true. IBM seems a pretty diversified company, so Red Hat doing 5% of the total revenue may not be too bad. I don't know how well Red Hat is doing commercially, but a few bad quarters could draw negative attention of the sort where upper management wants to start messing with you, seeking more synergy, efficiency, alignment. Being a much smaller company within Verizon, having been left alone for a little while, we were then told that The Hug was coming. It did. We didn't grow to be their next billion-dollar business unit (no surprise to anyone in our little company), nor were we able to complement other products (ha! synergy!), and we were shuttered. At some point... engineering will notice.
RHEL has had no significant investment to keep it from becoming irrelevant in the next five years. The datacenter and deployments of Linux have changed so rapidly (mostly due to the new centralization and homogeneity of infrastructure investment) that RHEL's niche is rapidly shrinking.
This is clearly someone that is not paying attention to what Red Hat is doing.
RHEL is the enterprise gold standard.
Fedora, which itself has become an incredible server and desktop platform, is a big part of the pipeline for it.
All the work with OpenShift, Backstage, Podman/Quadlet, etc.
They're going to be fine, from my graybeard position.
RHEL 10 beta has some interesting stuff in it. Running the OS itself as a container caught my eye.
Image mode RHEL is a pretty significant investment.
Apart from that, in terms of keeping RHEL relevant, most of the attention is on making it easier to operate fleets at scale rather than the OS itself. Red Hat Insights, Image Builder, services in general, etc.
Those are the key things that would keep it competitive against Ubuntu, Debian, Alma, Oracle etc.
If RHEL is becoming irrelevant, what distro will replace it for enterprise users?
We don’t run anything on bare metal anymore; it’s all containers (90k-employee very large enterprise).
Of course I can’t speak for all the teams, but all new projects are going out on Kubernetes and we don’t care about RHEL at all; typically it’s Alpine or Debian base images.
You have a hardware implementation of Docker?
Why leave terraform? You don’t feel OpenTofu will carry the torch well enough?
Podman is pretty good.
Isn’t MQ pretty good?
It’s heavy and old. We have to consume some, but Kafka is typically nicer to work with (provided someone else is running it).
If Kafka is nicer to work with, then it must be horrible.
> Every IBM product I've ever used is universally reviled by every person I've met who also had to use it
Not a product, but a service: is Red Hat Linux a counterexample?
Unfortunately IBM is going to ruin everything that was good about working for Hashicorp and eventually everything that was good about Hashicorp products.
I worked for a company acquired by IBM, and we held hope like you are doing, but it was only a matter of time before the benefit cuts, layoffs, and death of the pre-existing culture.
Your best bet is to quit right after the acquisition and hope they give you a big retention package to stay. These things are pretty common to ease acquisition transitions and the packages can be massive, easily six figures. Then when the package pays out you can leave for good.
Red Hatter here.
None of that has happened for us at Red Hat. Other than the one round of layoffs, which occurred at the time basically every tech company everywhere was doing much larger layoffs, that was pretty much it; and there's no reason to think our layoffs wouldn't have been much greater at that time if we were not under the IBM umbrella.
Besides that, I don't even remember when we were acquired; absolutely nothing has changed for us in engineering. We have the same co-workers, still use all Red Hat email / intranets / IT, etc., and there's still a healthy promotions pipeline. I don't even know anyone from the IBM side. We had heard all the horror stories of other companies IBM acquired, but for whatever reason it's not been that way at all for us, at least in the engineering group.
Former Hatter here (Solution Architect Q2 '21 -> Q4 '22). Other than the discussions that took place around moving the storage/business products and teams under IBM (and the recently announced transfer of middleware), I wouldn't have expected engineering to do that much interfacing with IBM. At most, division leadership maybe (this is just personal speculation). Finance and Sales on the other hand... quite a bit more.
We had a really fun time where the classic s-word was thrown around... "s y n e r g y". Some of the folks I got to meet across the aisle had a pretty strong pre-2010 mindset. Even around opinions of the acquisition, thinking it was just another case of SOP for the business and we'd be fully integrated Soon™.
The key thing people need to remember about the Red Hat acquisition is that it was purely for expertise and personnel. Red Hat has no (or very little) IP. It's not like IBM was snatching them up to take advantage of patents or whatnot. It's in their best interest to do as little as possible to poke the bear that is RH engineering, because if there were ever a large-scale exodus, IBM would be holding the world's largest $34B sack of excrement we've seen. All of the value in the acquisition is the engineering talent and customer relationships Red Hat has, not the products themselves. The power of open source development!
It's heartening to hear that your experience in engineering has been positive (or neutral?) so far. Sales saw some massive churn because that's an area IBM did have a heavier impact in. There were some fairly ridiculous expectations set for year-over-year, completely dismissing previous results and obvious upcoming trends. Lost a lot of good reps over that...
Red Hatter since 2016, first in Consulting, now in Sales.
Oh the “synergy” rocket chat channel we had back then…
Things have been changing, for sure. So has the industry. So have our customers. By and large, Red Hatters on the ground have fought hard to preserve the culture. I have many friends across Red Hat, many that transitioned to IBM (Storage, some Middleware). Folks still love being a part of Red Hat.
On the topic of ridiculous expectations…there’s some. But Red Hatters generally figure out how to do ridiculous things like run the internet on open source software.
Every time you say rocket chat, I have to appear.
FWIW, the change at Red Hat has always been hard to separate between the forces of IBM and the reality of changing leadership. In a lot of ways those are intertwined because some of the new leadership came from IBM. Whatever change there was happened relatively gradually over many years.
Paul Cormier was a very different type of CEO than Jim Whitehurst for sure. But that's not an IBM thing, he was with Red Hat for 20 years previously.
I agree with you FWIW. The company also basically doubled in size from 2019 to 2023. It's very hard to grow like that and experience zero changes. And COVID happened shortly after so that also throws a wrench into the comparisons.
The point is, it's hard to point to any particular decisions or changes I disliked and say "IBM did that"
I do miss having Jim Whitehurst around. Jim spent 90 minutes on the Wednesday afternoon of my New Hire Orientation week with my cohort helping to make sure all of us could login to email and chat, answering questions, telling a couple short stories. He literally helped build the Red Hat culture starting at New Hire. Kind of magical when the company is an 11K person global business and doing 5B in revenue.
Cormier and Hicks have their strengths. Hicks in particular seems to care about cultural shifts and also seems adept at identifying key times and places to invest in engineering efforts.
The folks we have imported from IBM are hiring folks that are attempting to make Red Hat more aggressive, efficient, innovative. Some bets are paying off. More are to be decided soon. These kinds of bets and changes haven’t been for everyone.
>The company also basically doubled in size from 2019 to 2023. It's very hard to grow like that and experience zero changes.
Longtime Red Hatter here. Most of any challenges I see at Red Hat around culture I attribute to this rapid growth. In some ways it's surprising how well so many relatively new hires seem to internalize the company's traditional values.
Yeah, when I left I think there were something like 7x the number of people than when I joined. You can't run those two companies the same way no matter who is in charge.
cpitman!!!! Miss you a ton!!!
We use Nomad where I work and we LOVE it. Prior to Nomad we used K8s for several years, which, at that point, allowed us to become cloud-agnostic. With the move to Nomad about 3+ years ago, we were able to transition away from cloud and back to leased bare-metal machines. During our time with K8s, it didn't have a good bare-metal story with its ingress mechanism. In contrast, as we investigated Nomad, it was easy to deploy on pure metal without a hypervisor. The result of our migration to Nomad was having so many capable and far-less-expensive hosting options. Lastly, as part of our Nomad control plane, we also adopted Vault and Consul with great success.
I know there are horror stories around this acquisition and lots of predictions about what will happen, but only time will tell. At a minimum, it has been a delight to use the HashiCorp software stack along with the approach they brought to our engineering workflow (remember Vagrant?). These innovations and approaches aren't going away.
I would GTFO. IBM ain't your friend and ain't your savior, is unlikely to invest, and the worst may come with increasing IBM management sticking their fingers in the pie. The folks who did well out of this already know; if they have the checks to cash and that was your takeaway, congratulations. Otherwise find another opportunity. If nothing else, look around and find out what you are worth on the market, and then have that hard discussion soon with HashiCorp/IBM.
I worked with a bunch of people who had worked at a startup that got bought by IBM. As the other commenters attested, they too experienced that IBM is not the kind of company that's going to turn on the investment taps.
There are worse companies to get bought by, but if you've only ever worked at startups then you're not likely to enjoy what this becomes.
I spoke with a guy (too long ago) who was a "genius architect" and worked for a company that was small enough that he got to implement his castles in the air. Knowing him, they might have been quite good, but it was one person who knew the details and made changes at the architect scale. He had a quirky way of thinking.
When IBM acquired that company, after a few weeks, this guy had a meeting with new engineering people. The very first meeting, they changed things for him. Instead of a single winding road of development, they wrote out a large spreadsheet. The rows were the distinguishable parts of his large'ish and clever architecture; the columns were assignments. They essentially dismantled his world into parts, for a team to implement. He was distraught. He didn't think like that. They did not discuss it, it was the marching orders. He quit shortly afterwards, which might have been optimal for IBM.
If you are good at your job and want to deliver fast, then you need to adapt to changing circumstances and continue on. Nothing wrong if you can't, but I have learnt that's how you play along to deliver your best continuously.
You mean perhaps if you define "your job" and "adapting to changing circumstances" strictly through a corporate vocabulary.
One could argue that to deliver his best continuously he adapted to changing circumstances and left.
Hey on a personal note, dealing with you and your team on Nomad's GitHub issue tracker was always a good experience. I hope nomad still has a future under IBM's roof.
Fairly large Nomad Enterprise user here, and I just want to say thanks for all of the work you and the team put in. I'm a big fan of Nomad and really appreciate the opportunities it has afforded me.
Regardless of the general sentiment, hoping for the best outcome for all of you.
I just wanted to say thank you for your work on Nomad. It's one of the most pleasant and useful pieces of software I have ever worked with. Nomad allowed us to build out a large fleet of servers with a small team while still enjoying the process.
Just wanted to say thanks for your work on Nomad. Amazing tool that had me rethink a lot of things about how i work with infra and software in general and is always pleasant to work with by itself.
What a blessing to see your comment today. It's been a while. I hope this works in your favor, whatever that means.
The Nomad team is a big reason why I am using it and I am sad to see you go.
No matter how long you worked at the acquiree or how instrumental you were, be prepared for your opinions to be overridden by IBM lifers because you're not "true blue" (i.e. hired directly into IBM). Also prepare for the bluewashing!
There are no resources and opportunities after being acquired by IBM. I worked for Red Hat when they were acquired. Our former CEO was quickly shown the door. We were making so much profit, almost $1B in quarterly revenue. I left not long after the acquisition. Not long after I left, they laid off a bunch of staff.
No matter what they tell you, your day to day will not improve. For my area, it was mostly business as usual, but a net decrease in comp because IBM's ESPP is trash.
if you left right after the acquisition how can you even speak to what the experience has been?
It was within a year, not like the day after.
I have found the experience very different than what the OP's experience is.
As you know the layoffs that happened were around the same time as the rest of the industry layoffs were happening (fashion firing), I don't feel like it had a significant effect on the culture.
I am fully remote though, and have been for 15 years.
What part of my experience did you find different than your own? I said the day to day was mostly the same, minus the decrease in comp. I mostly was trying to articulate that the idea that IBM is going to 'super power' Hashicorp is not real, despite what IBM says.
A lot of what was communicated during the acquisition process was how IBM was going to super power Red Hat and help Red Hat grow into an even larger entity, and how Red Hat actually need IBM to survive.
Hashicorp's stuff always struck me as pretty hacky with awkward design decisions. For Terraform (at least a few years ago) a badly reviewed PR could cause catastrophic data loss because resources are deleted without requiring an explicit tombstone.
Then they did the license change, which didn't reflect well on them.
Now it's being sold to IBM, which is essentially a consulting company trying to pivot to mostly undifferentiated software offerings. So I guess Hashicorp is basically over.
I suspect the various forks will be used for a while.
> For Terraform (at least a few years ago) a badly reviewed PR could cause catastrophic data loss because resources are deleted without requiring an explicit tombstone.
There have been lifecycle rules in place for as long as I can remember to prevent stuff like this. I'm not sure this is a "problem" unique to terraform.
IIRC, the lifecycle hook only prevents destruction of the resource if it needs to be replaced (e.g. change an immutable field). If you outright delete the resource declaration in code then it’s destroyed. I may be misremembering though
The Google Cloud Terraform provider includes, on Cloud SQL instances, an argument "deletion_protection" that defaults to true. It will make the provider fail to apply any change that would destroy that instance without first applying a change to set that argument to false.
That's what I expected lifecycle.prevent_destroy to do when I first saw it, but indeed it does not.
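To make the distinction concrete, here's a minimal HCL sketch of the two guard mechanisms (resource names are hypothetical; `deletion_protection` is an argument of the Google provider's Cloud SQL resource, while `prevent_destroy` is a core Terraform lifecycle setting):

```hcl
# Hypothetical example: two different "don't delete this" mechanisms.

resource "google_sql_database_instance" "main" {
  name             = "prod-db"
  database_version = "POSTGRES_15"

  # Provider-level guard (defaults to true on this resource): any plan
  # that would destroy the instance fails until this is first set to
  # false and applied -- even if the resource block itself is deleted.
  deletion_protection = true
}

resource "aws_s3_bucket" "state" {
  bucket = "example-prod-state"

  # Terraform-level guard: blocks destroy-and-replace plans, but only
  # while this block remains in the configuration. Removing the whole
  # resource block from the code also removes the guard.
  lifecycle {
    prevent_destroy = true
  }
}
```

The practical difference is where the guard lives: `prevent_destroy` travels with the configuration, so deleting the declaration deletes the protection; `deletion_protection` is enforced through provider/API behavior and survives the declaration being removed.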
This is not a Terraform problem. This is your problem. In theory, you should be able to recreate the resource with only some downtime or a few services affected. You should centralize/separate state and have stronger protections for it.
I'm pretty sure you are. I've had it protect me from `terraform destroy`.
I think the previous post is saying a resource removed from a configuration file rather than an invocation explicitly deleting the resource in a command line. Of course if it’s removed from the config file, presumably the lifecycle configuration was as well!
Yeah, that's a legit challenge that it would be great if there was a better built-in solution for (I'm fairly sure you can protect against it with policy as code via Sentinel or OPA, but now you're having to maintain a list of protected resources too).
That said the failure mode is also a bit more than "a badly reviewed PR". It's:
* reviewing and approving a PR that removes a resource
* approving a run that explicitly states how many resources are going to be destroyed, and lists them
* (or having your runs auto-approve)
I've long theorised the actual problem here is that in 99% of cases everything is fine, and so people develop a form of review fatigue and muscle memory for approving things without actually reviewing them critically.
I find this statement to be technically correct, but practically untrue. Having worked in large terraform deployments using TFE, it's very easy for a resource to get deleted by mistake.
Terraform's provider model is fundamentally broken. You cannot spin up a k8s cluster and then subsequently use the k8s provider to configure it in the same workspace. You need a different workspace to import the outputs. The net result was that we had something like 5 workspaces which really should have been one or two.
A seemingly inconsequential change in one of the earlier workspaces could absolutely wreck resources in the later ones.
It's very easy in such a scenario to trigger a delete and replace, and for larger changes, you have to inspect the plan very, very carefully. The other pain point was I found most of my colleagues going "IDK, this is what worked in non-prod" whilst plans were actively destroying and recreating things, as long as the plan looked like it would execute and create whatever little thing they were working on, the downstream consequences didn't matter (I realize this is not a shortcoming of the tool itself).
What happens if you forget the lifecycle annotations or put them in the wrong place or you accidentally delete them? Last time I checked it was data loss, but that was a few years ago.
The same as in any other language when what you wrote was not what you intended? Sorry, I'm really confused about what your complaint here is or how you'd prefer it to work. If you make a sensitive resource managed by any kind of IaC, of course the IaC can destroy it in a manner that results in irretrievable data loss. The language has forever had semantics in place to prevent that, and I'm not sure as a power user I'd want it any other way: I'm explicit about what I want it to do and don't want it making crazy assumptions that I didn't write.
like, what happens if you forget to free a pointer in C? sorry for the snark, but there are an unbelievable number of things to complain about in tf; never heard this one.
> what happens if you forget to free a pointer in c?
Assuming you mean "forget" to free malloc'd space referenced by at least one pointer, that's an easy one: it's reclaimed by the OS when the process ends.
Whether that's a bad thing or not really depends on context; there are entire suites of interlocked processing pipelines built around the notion of allocating required resources, pushing data through, and terminating on completion, with no free()s.
surely my salient point is recognized regardless of semantics, but thanks for the correction. To use another example in another post - what happens if you DROP TABLE in sql?
DROP TABLE is explicit. Inadvertently removing a line from a config file and having Postgres decide to automatically "clean up" that "unneeded table" would be a more apt analogy.
What happens is that you call Iron Mountain and find out that those tapes don't actually have anything useful on them.
I mean, it's also data loss if you run DROP DATABASE when you shouldn't. That's not SQL's fault.
I think in this context it's that your database server is lost if you accidentally forget to write KEEP DATABASE.
"What happens if I turn a table saw on and start breakdancing on it?"
Of course you're going to hurt yourself. If you didn't put lifecycle blocks on your production resources, you weren't organizationally mature enough to be using Terraform in production. Take an associate Terraform course, this specific topic is covered in it.
I'm not familiar with every lifecycle argument but I don't know of any that prevent resources being destroyed if they are removed from the tf file (what the parent was talking about). prevent_destroy, per docs, only applies as long as the resource is defined.
I think the only way to avoid accidentally destroying a resource is to refer to it somewhere else, like in a depends_on array. At least that would block the plan.
>I don't know of any that prevent resources being destroyed if they are removed from the tf file (what the parent was talking about).
Azure Locks (which you can also manage with Terraform), Open Policy Agent, Sentinel rules, etc. will prevent a destroy even if you remove the definition from your Terraform codebase. Again, if you're not operationally mature enough, the problem isn't the tool, it's you.
"Operationally mature" is code here for "the gun starts out loaded and pointed at your foot". It's fine to point out that that's a suboptimal design for a tool.
>Operationally mature" is code here for "the gun starts out loaded and pointed at your foot"
No, it's code for "don't build a load bearing bridge if you don't understand structural engineering."
> It's fine to point out that that's a suboptimal design for a tool.
This isn't "suboptimal" though. If you delete a stored procedure in your RDBMS and it causes an outage, it's not because SQL/PostgreSQL is suboptimal. Similarly if you accidentally delete files from your file system, it's not because file systems are "suboptimal". It's because you weren't operationally mature enough to have proper testing and backups in place.
An easy way to get someone to admit that terraform is a hacky child’s language is to ask how to simply print out the values of variables and resources you are using in terraform easily. This basic programming language 101 functionality is not present in the language
A declarative language can absolutely print out what it does know at the time, which of course won't be everything. But if I'm taking an input and morphing it at runtime, like looping or just moving the information around in a data structure, which Terraform absolutely allows you to do, the runtime of Terraform has all that information. I just can't get it out.
Plus, there are many times I don't want to have to use the REPL. Maybe I'm in CI or something. The fact that I cannot easily iterate over the values of locals and variables to see what they are in, say, some nested list or object, and just print out the values as I go along for the things Terraform does know, is just crappy design.
This works great for toy examples and fails the moment you have 1 (one) module
It doesn't fail, but you're right you can't reach inside submodules with terraform console; I wish you could.
HCL isn’t a programming language. This seems to be the main misconception about it and Terraform.
Any sufficiently large configuration language eventually becomes Turing complete (or close to it). See HCL, GitHub actions, kubernetes.
Kubernetes is not like the others in that list because it remains a declaration of intended state. There are for sure no "if", "loop", or even variables in the .yaml files. You may be thinking of the damn near infinite templating languages that generate said yaml, or even Kustomize that is JSONPatch-as-a-Service. GHA is not like the others because it is an imperative scripting language in yaml, not a "configuration language"
It’s so infuriatingly close though which is what makes it so fucking annoying to work with. It has loops, conditionals, variables…
Indeed, it's the worst case of uncanny valley syndrome!
I agree that terraform is hacky, but is "terraform output {variable name}" not how you would do that?
You have to be able to actually specify the output. And that does not handle all use cases. And it has requirements on how it can be run. And it requires going through the full lifecycle of the plan. And it won't work in many circumstances without an apply.
So no. Terraform has the information internally in many cases. There’s just no easy way to print it out.
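For what it's worth, the closest workarounds are `output` blocks and the `terraform console` REPL; a minimal sketch (variable and local names are hypothetical):

```hcl
variable "regions" {
  type    = list(string)
  default = ["us-east-1", "eu-west-1"]
}

locals {
  # A derived structure you might want to inspect mid-development.
  buckets = { for r in var.regions : r => "logs-${r}" }
}

# The only first-class way to surface a value is to declare an output...
output "buckets" {
  value = local.buckets
}

# ...then run `terraform output buckets` after an apply, or evaluate an
# expression interactively: echo 'local.buckets' | terraform console
```

As noted elsewhere in the thread, neither option can reach inside submodules, and the console needs an initialized working directory, so there's no equivalent of a printf dropped next to the expression you're debugging.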
I concur. I looked pretty hard into adapting Serf as part of a custom service mesh and it had some bonkers designs such as a big "everything" interface used just to break a cyclic module dependency (perhaps between the CLI and the library? I don't recall exactly), as well as lots of stuff that only made sense if you wanted "something to run Consul on top of" rather than a carefully-designed tool of its own with limited but cohesive scope. It seemed like a lot of brittle "just-so" code, which to some extent is probably due to how Go discourages abstraction, but really rubbed me the wrong way.
My hot take is just that Vault isn't a good solution, and the permissions model is wholly inadequate.
Except for not "feeling" secure, the only thing everyone wants is a Windows AD file share with ACLs.
Just no one realises this: all the Vault on-disk encryption and unsealing stuff is irrelevant; it's solving a problem handled at an entirely different level.
Sorry HashiCorp, been there and got the Tee-shirt (pink) :)
Actually for me, the company I was at that IBM purchased was on the verge of folding, so in that case, IBM saved our jobs and I was there for many years.
We experienced arbitrary layoffs in 2023, followed by an ominous feeling that more layoffs were imminent. However, the announcement of a deal changed the situation.
Now, we are actively hiring for numerous positions.
Personally, I am not planning to stay much longer. I had hoped that our corp structure would be similar to RedHat, but it seems that they intend to fully integrate us into the IBM mothership.
I really wanted to work at HashiCorp in 2017/2018 and did five interviews in one day only to get ghosted[1]. That experience soured me on HC and its tools but I still admired them from afar.
End of an era.
---
[1]: https://blog.webb.page/2018-01-11-why-the-job-search-sucks.t...
I used to work at HashiCorp, and was a hiring manager. I know there's reasons why candidates might get given vague answers on why we're not proceeding, but I'd have been horrified to learn someone we interviewed got ghosted. Someone who was so far into the process that they did five interviews?! Inexcusable.
I'm so sorry that happened to you :( I hope you found somewhere else that filled you with excitement.
Thank you.
Personal projects fill the void when $DAYJOB is lacking.
What do you expect is the reason this happens? I would suspect your skill assessment after a handful of interviews is sound and most people liked you. Do you think you just run into a person eventually that doesn't vibe?
I read this as the GHOSTING is the thing that bother them. After a full day of interviews, it sounds like. The failure to be hired doesn't sound like it bothers them to me.
Who knows? I wish the hiring team remembered that real-life people are looking for work because they have bills to pay and regular communication is necessary.
Two months ago a founder reached out to me, gave me a coding project, I completed it (and got paid!), spoke with his co-founder, and then...nothing. At least I got paid but man, YOU reached out to ME. I don't get it.
If the company gets 30 applicants, 10 of which go to the final round and 7 of those are really good, if they only have 5 openings then 2 really good applicants are not getting offers.
Those two should at least get told they didn't make it.
Oh sure, agreed. That's pretty poor.
That sucks and I apologize this happened to you.
It is what it is.
I ended up having to move out of my hometown (Boston) to stay with my wife's friend's family and now we live in CA. I have a delicious loquat tree in my backyard so things worked out, haha!
Red Hat has been a very atypical approach. There has been some swapping of teams back and forth but, as far as I can tell (been out of it for a while), Red Hat is still quasi-independent. Still lots of changes (probably most notably because of a lot of growth) but strategic Red Hat areas still seem to be pretty independent.
Broadly independent but filled to the gills with folks who spent a decade or more at IBM before landing at Red Hat. While this has been true of rank and file for years, recently it’s true on the c-suite.
Was probably truer of middleware than other areas. (Which I gather is largely going over to IBM.) Linux had a very significant DEC legacy. OpenShift was essentially greenfield from a startup acquisition (that got totally rewritten for Kubernetes anyway) and I'm not sure I would characterize people in that area as broadly coming from any particular large vendor.
> Red Hat is still quasi-independent. Still lots of changes (probably most notably because of a lot of growth) but strategic Red Hat areas still seem to be pretty independent.
Still broadly correct.
Was HashiCorp ever profitable since its IPO? From here, it says no: https://stockanalysis.com/stocks/hcp/financials/
If never profitable (or terrible return on equity), why would you call the layoffs "arbitrary"? It seems pretty reasonable to me.
Why hire people in the first place if you aren't profitable? Seems pretty irresponsible to me. Or have the rules changed?
Yes, the rules have changed. It seems the idea is to get big fast, increasing revenue without regard to profits, and eventually have a great IPO; then one of the following:
1. hope you can sucker someone into buying the company
2. keep the VC $ flowing and continue growing, then loop to # 1
3. worst case, need to start making a profit and hope you can survive until # 1. If # 1 does not happen, pray(?).
During this time, the founders are pulling in a great salary.
As it happened with the other startups that were acquired by IBM, this too shall pass through the digestion system of the dinosaur and ejected out as a dump. Hashicorp products are showing the signs of a legacy thing already. IBM is the nursing home for these sort of aging stuff.
I'm a heavy user of Terraform and Vault products. Both do not belong to this era. Also worked for a startup acquired and dumped by IBM.
> I'm a heavy user of Terraform and Vault products. Both do not belong to this era.
So do you find Terraform and Vault good or bad? (sorry, not a native English speaker and I had trouble parsing the sentence)
What are the modern equivalents? For Terraform I'd imagine it's Pulumi or OpenTofu but what is it for Vault? Last I checked OpenBao didn't seem to have much juice but it's been a minute since I did so. Or are there unrelated projects in this space that are on the same trajectory as Hashicorp was a decade ago?
Crossplane for TF.
For secrets, whatever your cloud provider has (Google Secret Manager etc).
One of those places (like HP, Oracle and Broadcom, and also CA back in the day) where once good companies go to die.
Redhat has really delivered for IBM and IBM seems not to have messed it up too bad.
Some of this is obvious (linux and mainframes aren't a bad combo). Some of it I'm a bit surprised by (openshift revenue seems strong).
Probably already basically returned purchase price in revenue and much more than purchase price in market cap.
A noticeable thing is
https://www.redhat.com/en
With most of these types of plays, the home page has stacked toolbars / marketing / popups / announcements from the parent company and their branding everywhere (IBM XXX powered by Redhat)... I see very little IBM logo or corporate pop-up policy jank on redhat.com.
Nice. When I opened their homepage, I could not find anything obvious showing they are owned by IBM. I literally had to search the HTML source to find the string "IBM"!
As a current Red Hat employee, I can say that they've treated us far better than the likes of Oracle or Broadcom would have.
Give it time.
It's been almost 6 years
IBM acquired SoftLayer in 2013 and the bluewashing didn't reach a fever pitch until 2019 or so. Also, the pandemic slowed things down at an already dinosauric company. IBM is over a hundred years old. I have faith that it will get around to entirely ruining Red Hat sooner than later.
Finally, a company to match the quality of Terraform.
IBM is pushing to restore software patents in the US (and elsewhere); not a friend of software freedom.
People who stayed at IBM because they could not afford to go anywhere else.
People who worked at companies acquired by IBM and could not afford to go anywhere else.
A mixture of both will be involved from now on in decision making about your core platform products.
In some ways it feels to me like a turning point for the GFC/ZIRP-through-COVID era of tech companies with no path to profit.
After the haze of the LLM bubble passes, I hope startups have an exit strategy other than "we'll just get 0.01% of users to pay 6+ figures for support" or "ads".
Good tech deserves a good business model such that it can endure for the long term.
Enjoy switching to Lotus Notes.
It's called HCL Notes now: https://en.wikipedia.org/wiki/HCL_Notes
And Hashicorp are experts in HCL so I am sure they will love it.
This is hilarious.
IBM switched off Notes to Microsoft 365, maybe two years ago or so.
I only correct you because it's an even bigger indictment of Notes that IBM switched off of it.
Been gone for a couple of years now. Outlook replaced it. Legacy Domino apps still around in various places though.
Condolences, Hashicorp folks. Been there.
I learned many things at HashiCorp but none as important as choosing ISOs over RSUs when given the chance. Thank you for the gains $HCP.
I met some great people along the way that I'm glad to have gotten the opportunity to work with. Godspeed all!
It's been good using HashiCorp tools; now IBM is going to run them into the ground. I need to look into the fork of TF now.
All these Red Hatters talking like CentOS isn't dead. Like wtf, the Kool-Aid must taste good.
Investors at IPO lost quite a bit of money...
Yep, we did. I wrote it off around the time of the licence change, just after they ditched the TF Team plan in favour of the utterly ridiculous "Resources Under Management" billing model.
I knew the company had lost the plot at that point.
Who's the target audience for this pricing that can afford this? The RUM pricing is indeed quite ridiculous.
It feels quite ridiculous, especially if you are managing "soft" resources like IAM roles via Terraform / Pulumi. At least with real resources (say, RDS instances), one can argue that Terraform / Pulumi pricing is a small percentage of the cloud bill. But IAM roles aren't charged for in the cloud, and there are so many of them (especially if you use IaC to create a very elaborate scheme).
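A rough sketch of why "soft" resources distort the bill (every price and count here is hypothetical, just to show the shape of the argument):

```python
# Toy RUM model: the per-resource-month price is flat, regardless of
# what the underlying cloud resource actually costs to run.
RUM_PRICE = 0.10  # hypothetical dollars per resource-month

resources = {
    "rds_instance": {"count": 5,   "cloud_cost": 300.0},  # real, billable infra
    "iam_role":     {"count": 400, "cloud_cost": 0.0},    # free in the cloud
}

rum_bill = sum(r["count"] * RUM_PRICE for r in resources.values())
cloud_bill = sum(r["count"] * r["cloud_cost"] for r in resources.values())

print(rum_bill)    # → 40.5
print(cloud_bill)  # → 1500.0
```

In this toy setup the free IAM roles account for ~99% of what RUM charges for, so the "small percentage of the cloud bill" argument falls apart the moment your state is dominated by zero-cost resources.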
> Who's the target audience for this pricing that can afford this?
The kind of customers it is good to have.
Because filtering out price sensitive customers is a sound business strategy.
As a rule of thumb, solve any problem your customer might have. Except not having money.
There is an argument to be made that price-sensitive customers are a neglected market. Granted, marketing to them is very different - they're prone to being scooped if someone comes by willing to sell your same product to them at a loss (hi, Amazon and Walmart) - but there are a lot more of them and you're not fighting every startup on the planet for the same handful of clients.
Business have made a killing in China and India for a reason, after all.
+ There’s an argument against every rule of thumb.
+ For what it is worth, the just-one-percent-of-all-Chinese plan is historically a poor business strategy.
+ As you point out, targeting price sensitive customers puts you in competition with Walmart and Amazon. Not only that but you are competing for their worst customers.
> you're not fighting every startup on the planet for the same handful of clients
Not having access to good clients/customers suggests the business idea might not be viable. Chasing money from people without the wherewithal or will to pay, does not make your business idea viable.
But again it is a rule of thumb.
That's fair. I wasn't coming for you and I'm certainly not trying to fight you from some kind of authority - I'm definitely not a businessperson.
The only point I was trying to get across is that even "bad" customers are still customers, and that there's still a lot of money to be made meeting people's needs doing the work others don't want to do. I feel like this applies from the bottom of the socioeconomic ladder all the way to the top - that's all. Perhaps I should've made that clearer, and that's on me.
An unsolicited side note: I think the bristling to this post was because of the language you were using. Talking about the poor as if they were to be discarded made you look a bit as if you have no empathy, which might not be fair to you. I get it - business require being hard-hearted if you want to get ahead because if you don't make tough decisions, someone else will - but it probably wasn't your best look, you know?
> Talking about the poor as if they were to be discarded
The context was Hashicorp pricing for a web service, I was not talking about the poor.
Not being able to afford a B2B service is not an injustice.
> there's still a lot of money to be made meeting people's needs doing the work others don't want to do. I feel like this applies from the bottom of the socioeconomic ladder
Are you betting your breakfast on walking your talking?
> even "bad" customers are still customers
That’s why I don’t recommend going out to find them. They tax your ability to provide high quality. You will have enough problems without trying to get blood from a turnip.
> it probably wasn't your best look, you know
For better or worse, it’s not going to keep me up grieving on long winter nights.
Good for who? Good for people getting bonuses? Good for executives?
It doesn't seem to be good for the customers or the people using the software or the people contributing to the open source code. It also doesn't seem to have been good for the investors, looking at the other comments.
Good for people who got in pre-IPO. Bad for people who got in post-IPO.
We are not talking about price sensitive customers though. Hashicorp shut out all customers who wanted a fixed price agreement with RUM.
It also creates horrible incentives. Oh I won't run this in isolated project or under a separate service account since that costs more, let's just pile everything together.
To be fair to RUM pricing, there are also horrible incentives for workflow invocation based pricing models, or workspace count models.
Pulumi’s RUM pricing is why I was very hesitant to even evaluate it as an alternative to just using terraform.
I’m finding that the basic backend functionality of Pulumi and Terraform managed cloud is fairly easy to build (especially Terraform, I can’t quite believe how absurdly simple their cloud is…)
Plus there are open source projects like Atlantis that fit the bill for many teams with regards to terraform automation.
Terrateam too[0]
Although Terrateam is more tightly integrated with a VCS provider.
Disclaimer: I co-founded Terrateam.
[0]https://github.com/terrateamio/terrateam
That must explain why they broke up the S3 resources into a bunch of tiny resources.
It became apparent right after the IPO. Our team got a new VP who changed the mantra from practitioner-first to enterprise-first. Soon after, they laid off anyone not working on enterprise features. It was the sad death of a great company culture. Mitchell left around the same time, which, IMO, speaks volumes.
The older I get, the more I'm convinced that practitioner-first is the only reasonable way to drive a product's features, while enterprise-first is the only reasonable way to drive a company's revenue.
Which is to say strong sustainable products need both.
... but ffs don't let the entire company use enterprise as a reason to ignore practitioner feature requests.
This is probably inaccurate, but it seemed like they wrote it off as a safe move, with their main competitor, Pulumi, getting away with it.
However, to play devil's advocate, the number of Terraform resources is a (slightly weak) predictor for resource consumption. Every resource necessitates API calls that consume compute resources. So, if you're offering a "cloud" service that executes Terraform, it's probably a decent way to scale costs appropriately.
I hate it, though. It's user-hostile and forces people to adopt anti-patterns to limit costs.
> Every resource necessitates API calls that consume compute resources
In that world, I think it'd make more sense to charge per run-time second of performing an operation. I understand the argument you are making but the issue is you get charged even if you never touch that resource again via an operation.
It might make sense if TFC did something, anything, with those resources between operations to like...manage them. But...
> However, to play devil's advocate, the number of Terraform resources is a (slightly weak) predictor for resource consumption. Every resource necessitates API calls that consume compute resources. So, if you're offering a "cloud" service that executes Terraform, it's probably a decent way to scale costs appropriately.
That would make sense if you paid per API call to any of the cloud providers.
What happens when you run `terraform apply`? Arguably, a lot of things, but at its core it:
- Computes a list of resources and their expected state (where computation is generally proportional to the number of resources).
- Synchronizes the remote state by looking up each of these resources (where network ingress/egress is proportional to the number of resources).
- Compares the expected state to the remote state (again, where computation is generally proportional to the number of resources).
- Executes API calls to make the remote state match the expected state (again, where network ingress/egress is proportional to the number of resources).
- Stores the new state (where space is most certainly proportional to the number of resources)
This is a bit simplified, but my point is that in each of the five operations, the number of resources can be used as a predictor for the consumed compute resources (network/cpu/memory/disk). A customer with 10k resources is necessarily going to consume more compute resources than one with 10 resources.
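A back-of-the-envelope sketch of that proportionality (the per-phase constants are entirely made up; only the linear shape matters):

```python
# Toy model: each of the five apply phases does work roughly
# proportional to the number of resources n (hypothetical cost units).
def apply_cost(n_resources: int) -> float:
    plan    = 0.10 * n_resources  # compute expected state
    refresh = 1.00 * n_resources  # one read API call per resource
    diff    = 0.05 * n_resources  # compare expected vs remote state
    mutate  = 0.50 * n_resources  # write API calls for drifted resources
    store   = 0.01 * n_resources  # state file grows linearly
    return plan + refresh + diff + mutate + store

# A customer with 10k resources costs ~1000x one with 10 in this model.
print(round(apply_cost(10_000) / apply_cost(10)))  # → 1000
```

Which is the whole devil's-advocate case in one line: if every phase is O(n) in resources, then resource count is a usable (if crude) cost proxy.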
You can probably get a sense of it based on your own usage of terraform and the log output (or the time various resources take to get managed in the Terraform Cloud/Enterprise UI). I think in the majority of cases you'll see that the bulk of the wall-clock time is network bound: not because of the number of resources, but because the server at the other end (AWS, Azure, GCP, etc.) is doing a lot of work. I know in some cases things like SQL Server clusters on Azure can take literally hours to provision. Terraform will spend that "compute" time sitting there waiting; it's not actually doing much that's resource intensive.
And then at the end as you said "stores the new state". Which is basically a big JSON file. 10 resources? 1M resources? I'll leave you to work out how much it probably costs to save a JSON file of that size somewhere like S3 ;)
In absolute terms, I agree with you. But in practical terms I would wager it is negligible.
Yeah, I'm not putting it forward as a justifying argument (just playing devil's advocate). However, it's probably how they justify it to themselves :) What makes it extra absurd is the price they charge per resource. That's where it turns into robbery.
Sure but it’s calling those service apis directly. The “hard work” is not being done by the client making those calls. Presumably.
> forces people to adopt anti-patterns to limit costs
The previous pricing model, per workspace, did the same. Pricing models are often based on "value received", and therefore often can be worked around with anti-patterns (e.g. you pay for Microsoft 365 per user, so you can have users share the same account to lower costs).
I actually prefer the RUM model.
The previous "per apply" based model penalized early stage companies when your infrastructure is rapidly evolving, and discouraged splitting state into smaller workspaces/making smaller iterative changes.
Charging by RUM more closely aligns the pricing to the scale/complexity of the infrastructure being managed which makes more sense to me.
That said it has tempted me to move management of more resources into kubernetes (via cross plane/config connector)
Pricing, and, generally speaking, customer engagement were not their strength
Had to check the numbers. IPO at $80 and sold to IBM for $35.
Not great for investors, but insiders benefitted a lot!
Depends how you define insider. Employees were subject to a 6 month lockup and during that time the price dropped dramatically, but they still had to pay taxes on the $80 IPO price. Execs and institutional investors that were able to sell at IPO made out quite well though.
Too bad you can't sell the option to buy your shares the day you get them at whatever price you want :/
Execs are employees, were they really exempt from the lockout? Seems unethical.
thats preferred stock baby. Startups still a scam to work for.
Nearly certainly they were not. Any stock holder would be blacked out including existing investors.
That’s simply not true. You can look at the SEC filings and see exactly who was able to liquidate during the IPO.
I’m not passing judgment as to whether that’s “good” or “bad.” It simply is.
There are always haves and have-nots, lol. “Tech” isn’t exempted.
> they still had to pay taxes on the $80 IPO price
They will get capital losses.
That's not perfect.
It's really not. Let me know if you find any lenders that will let me pay off a mortgage with a capital loss.
At least in the startup narrative that circulates on HN, most early employees at a company with that kind of IPO would hope for a lottery-like financial windfall. Now their upside is that if they manage to get lucky a second time, they get to offset their winnings? :/
IPOs became exits in 2008. They’re no longer about raising capital
Google was profitable when it went public in 2004. But yeah, no longer about raising capital.
Worst investment I ever made. I like Mitchell and Armon so I emotionally just bought the IPO and closed my eyes presuming a TWLO or NET. Oh well!! :)
Yup that's me. Happy user of terraform at that time so bought their stock on the day of IPO.
Were you paying Terraform for anything at the time?
My doubt in the value of the company was that I've been using Terraform for years in Enterprise settings and never needed to pay the company for anything.
Totally worthless.
Running a few products. Quoted $1MM or so over 3 years for support. I was able to say no and saved six figures each month.
Yup. My worst investment. Never invest at IPO I guess.
If that's your worst investment, you're doing great. I bought stock in HashiCorp. I also bought stock in EBET, for a return of -100.00%.
It's been a zombie for a while...
*retail investors
Retail will always be holding the bag. This is known.
Eh. Lots of retail investors do well with the right stock. Lots of Apple investors have done well over the years. Microsoft even, with the right timing.
They didn't with HashiCorp certainly. Bought some but not too much and were part of a housecleaning a few years back (which I'm glad I did).
How far away from IPO are Apple and Microsoft now? I think parent is lamenting the state of IPOs as cashing out, if anything.
I'm surprised the deal was worth more than the market cap when they did the license rug pull.
Broadcom VMware play. If you’re invested as an enterprise in the ecosystem, is going to be a while before you can extricate yourself. In the meantime, you must pay up.
I'm pretty good at engineering fast moves. I took a company off Salesforce in 45 days. VMware servers are even easier to change out. Never done Terraform though.
This cowboy attitude doesn’t fly in regulated industries. Where VMware and co reign supreme
terraform is OSS, unless you're using the hosted HCP version (workspaces? I think they're called), which, I've been using terraform heavily and at scale since v0.7 and I have never once thought I needed or would pay for something like that.
Terraform is not OSS, it switched to a source-available license over a year ago.
OpenTofu[0] is the OSS fork though.
[0]: https://github.com/opentofu/opentofu
Disclaimer: involved with OpenTofu
I'm well aware and have contributed to OpenTofu project in a small manner. I hope you'll forgive me slightly misspeaking - any version of terraform before that license change is OSS. It is, however, perfectly free to use and most companies I've worked with are hard-pinned on a particular version of terraform and rarely on the bleeding edge.
No worries, that makes sense!
And thanks for contributing :)
Yep. You're thinking of TFE. Workspaces and Stacks are the advertised method for building composable infra.
That's true, but it's easier to switch from Terraform to Pulumi than it is to move from VMware to some other virtualization platform.
secrets, too.
Org's already knee deep in vault / vault-agent for PKI and secrets wont be eager to switch.
But vault was already a pretty enterprise offering. I'd imagine most people running it had an existing contract.
[flagged]
"HashiCorp's capabilities drive significant synergies across multiple strategic growth areas for IBM, including Red Hat, watsonx, data security, IT automation and Consulting"
this sounds like corporate AI slop
Worst part is that a lot of that slop is actually still human-generated.
Do any Opentofu cloud providers (like spacelift) offer a managed migration tool to their cloud and opentofu from TFE or TFC?
(Asking for a friend).
Here you go[0]! Docs are linked in the README, and there’s also a recent blog post about it[1].
In any case, make sure to reach out via the website chat widget / email / demo form, we’re happy to help!
The migration from Terraform to OpenTofu is pretty seamless right now, and documented in the OpenTofu docs[2].
[0]: https://github.com/spacelift-io/spacelift-migration-kit
[1]: https://spacelift.io/blog/how-to-migrate-from-terraform-clou...
[2]: https://opentofu.org/docs/intro/migration/
Disclaimer: work at Spacelift
Years before '93-'96 when I worked at Kaleida [1], a joint venture of Apple and IBM, alongside Taligent [2] their AIM Alliance [3] sister company, I laughed at the old joke:
Q: What do you get when you cross Apple and IBM?
A: IBM.
But then the joke was on me when I finally worked for a company owned by Apple and IBM at the same time, and experienced it first hand!
I gave Lou Gerstner a DreamScape [4] demo involving an animated disembodied spinning bouncing eyeball, who commented "That's a bit too right-brained for me." I replied "Oh no, I should have used the other eyeball!"
Later when Sun was shopping itself around, there were rumors that IBM might buy it, so the joke would still apply to them, but it would have been a more dignified death than Oracle ending up lawnmowering [5] Sun, sigh.
Now that Apple's 15 times bigger than IBM, I bet the joke still applies, giving Apple a great reason NOT to merge with IBM.
[1] https://en.wikipedia.org/wiki/Kaleida_Labs
[2] https://en.wikipedia.org/wiki/Taligent
[3] https://en.wikipedia.org/wiki/AIM_alliance
[4] https://www.youtube.com/watch?v=5NytloOy7WM&t=323s
[5] https://news.ycombinator.com/item?id=5170246
No wonder Vault is now trying to get me to pay $50 a month to store more than 25 secrets. Enshittifucation has already begun.
IBM has been on a shopping spree of garbage companies. Which tools are next?
Atlassian?
on the bright side, it could kill jira
Too big. Maybe redis?
Seems too big. Market cap is 1/3 of IBM's.
TeamForm, ServiceNow, Workday
Wordpress!
Will terraform still work for GCP and AWS?
[flagged]
I was really hoping this wouldn't happen given one org (IBM) effectively controls both Terraform and Ansible.
Salt and Puppet both don't seem in a great place.
System Initiative is just AWS still, yeah?
Welp.
OpenTofu is doing just fine ! Give it a whirl.
Is this even the same company that created Terraform? HashiCorp's trajectory has been baffling...
Ansible + Terriform synergized become Terrible.
I chuckled, nice job on the name
That said, I think a playbook in HCL would be worlds better than the absolutely staggering amount of nonsense needed to quote Jinja2 out of YAML.
I would also accept them just moving to the GitHub Actions style of ${{ or ${%, which would for sure be less disruptive, and (AIUI) could even be opt-in by just promoting the `#jinja2:variable_start_string:'${{', variable_end_string:'}}'` header up into playbook files, not just .j2 files
https://docs.ansible.com/ansible/11/collections/ansible/buil...
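To show what that delimiter override actually does, here's a minimal sketch using the Jinja2 Python library directly (outside Ansible — the `#jinja2:` header in a .j2 file sets these same `Environment` options; the template strings are made up):

```python
from jinja2 import Environment

# Jinja2's default {{ }} delimiters collide with YAML quoting rules;
# switching to GitHub-Actions-style ${{ }} sidesteps most of that.
env = Environment(variable_start_string="${{", variable_end_string="}}")

tmpl = env.from_string("port: ${{ app_port }}, name: {{ not_a_var }}")
print(tmpl.render(app_port=8080))
# → port: 8080, name: {{ not_a_var }}
```

Note how the default `{{ }}` now passes through as literal text, which is exactly the opt-in property: existing braces in YAML stop being template syntax.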
Turns out, nobody's quite figured out how to successfully charge for free shit, but it's moot when you can just burn venture capital for ten years until you get acquired and chopped for parts.
One of my friends was in management at HashiCorp, and what he told me was that a series of bad internal promotions to product management and heads of development tanked the company. At the same time there was a huge problem with leftist activist employees holding the company hostage. Not surprised they got scooped up for pennies on the dollar.
I love it when IBM tries to stay relevant!
Sad!
> By 2028, it is projected that generative AI will lead to the creation of 1 billion new cloud-native applications.
lmfao what the fuck? The source they reference: https://www.idc.com/getdoc.jsp?containerId=US51953724
These clowns want $2500 goddamned american dollars for the privilege of reading their bloviations on this topic, which i absolutely will not pay.
You know it's bad when the only people making money on this crap are management consultants.
Thinking back to 2014 using vagrant to develop services locally on my laptop I never would have imagined them getting swallowed up by big blue as some bizarre "AI" play. Shit is getting real weird around here.
> These clowns want $2500 goddamned american dollars for the privilege of reading their bloviations on this topic, which i absolutely will not pay.
You aren’t the target market for their “bloviations” - they are targeted at executives, and it isn’t like the executive pays this out of their own pocket, there is a budget and it comes out of the budget. Plus these reports generally aren’t aimed at technical people with significant pre-existing understanding of the field, their audience is more “I’m expected to make decisions about this topic but they didn’t cover it in my MBA”, or even “I need some convincing-sounding talking points to put in my slides for the board meeting, and if I cite an outside analyst they can’t argue with that”
Commonly with these reports a company buys a copy and then it can be freely shared within the company. Also $2,500 is likely just the list price and if you are a regular customer you’ll get a discount, or even find you’ve already paid for this report as part of some kind of subscription
Anyone prioritizing this nerfed, mindless dogshit over what their team is telling them and what's going on in the world around them is both an incompetent leader and a total idiot
A lot of the people paying for these analyst firm reports are sales people-so they can pass them on to their customers/prospects (to legally do that you often have to pay extra for “redistribution rights”)… and then the customer/prospect gets to read it for free
Who might not have much of an engineering team, or not one with relevant expertise… and why should they trust the vendor’s engineering team? If they are about to sign a contract for $$$, being able to find support for it in an independent analyst report can psychologically help a lot in the sales cycle
While the most useful reports for sales are those which directly compare products, like Gartner Magic Quadrant or Forrester Wave - a powerful tool if you come out on top - these kind of more background reports can help if the sales challenge is less “whose product should I buy?” and more “do I even need one of these products? should we be investing money in this?”
[flagged]
It has never paid my bills, in that I've never worked for an analyst firm.
My bills have been paid by working for vendors, where I have seen how sales and marketing use their reports in action. I have seen the amount of effort engineering and product management put in to try to present the best possible vision of their product and its future potential to these analysts. (I've never been personally directly involved in any of those activities though, I've just observed them from the margins.)
But, it isn't like the vendors have a huge amount of choice – if you refuse to engage with the analysts and utilise their reports in your sales cycle, what happens when your competitors do?
This sort of thing is why nobody gives a shit about IBM anymore and they have to keep just buying relevant companies to stay relevant.
Hopefully they do the right thing and hand hashicorp over to Redhat so they can open source the shit out of it. So they can do things like make OpenTofu the proper upstream for it, etc.
Every time ChatGPT outputs a Dockerfile, it counts as a cloud native application, right? :)
Did you intend to reference "It's a wonderful life?" When I read your comment I imagine a tiny child in Jimmy Stewart's arms, exclaiming the joys of capitalism ;-)
No but that's a funny coincidence :)
Who the heck is IDC's customer base, exactly? $2,500 for that, or $7,500 for this one about – drumroll, please – feature flags!
"Modern digital businesses need to be able to adapt to changing end-user demand, and since feature flags decouple release from deployment, it provides a good solution for improving software development velocity and business agility," said Jim Mercer, program vice president of IDC Software Development DevOps and DevSecOps. "Further, feature flags can help derisk releases, enable product experimentation, and allow for targeting and personalizing end-user experiences."
https://www.idc.com/getdoc.jsp?containerId=US52763824
Something I always respected about Americans is their talent for making money from absolutely nothing, providing zero or negative value in the process. Obviously it doesn't apply to everyone, but you have more than your fair share of these people.
Only in america: https://en.m.wikipedia.org/wiki/Charles_Ponzi
[flagged]
Relatively few IDC clients are paying retail for single reports other than reprint rights. They're clients with broad employee access to events and reports in various areas. I had access for many years and, yes, having (supposedly validated) data is more or less essential for lots of presentations and other types of documents, because otherwise your claims are viewed as pulling stuff out of your rear end.
So if you just point to something an IDC analyst has pulled out of their rear end... that's alright? ;-)
> more or less essential for lots of presentations and other types of documents
Wait. What? This reminds me of the trope of the "wikipedia citation" in high school and college.. that move was worth at most a C+. Are you seriously saying these fucks actually seriously cite this bullshit? In this day and age where even crowdsourced wiki articles seem "credible"? What the actual fuck? I hate this shit.
I'm clearly in the wrong field.
The analyst biz doesn't actually pay especially well--at least compared to big SV-based companies.
Yeah but I could make this shit up in like.. 15-20min/mo vs working for a living like a normal human person. I'm just imagining the sheer number of vertical feet of skiing I'm missing out on and seeing red.
[flagged]
[flagged]
Hashicorp blog post: https://www.hashicorp.com/en/blog/hashicorp-officially-joins...
IIRC HashiCorp's first product was Vagrant, which was written in Ruby to wrap the CLI args of VMware and other VM hosts.
So, what is the practical TL;DR for everyone who isn't either an employee or an investor? HashiCorp made a lot of significant stuff, but that stuff is mostly FOSS and the commercial product is very niche. I'm kinda surprised IBM even bought it, because it isn't very clear to me how commercializable this stuff is. So what does it mean? Will IBM most likely kill some FOSS products? Is that even possible? Were, say, Terraform or Nomad developed mostly by internal devs, or is there a solid enough community to keep up with development, or to simply fork the tools if things go south?
tl;dr: charging money for free software is, like, really hard.
Well, darn.
The consolidation of power and IP into just a handful of tech companies worries me. Having had the misfortune of working at IBM for a few months, I expect IBM leadership will give it the Red Hat treatment. The dinosaurs at IBM will shelve the IP and sell it for parts. Maybe Bloodmoar will buy up the rest and squeeze whatever profit remains from the acquisition.
If given the chance, just take the exit rather than trying to integrate into IBM.
>IBM leadership will give it the RedHat treatment. The dinosaurs at IBM will shelve their IP, and sell it for parts.
As someone working at Red Hat since before the acquisition, this does not match my experience of "the Red Hat treatment" even a little bit.
I don't doubt that they've handled acquisitions badly in the past but they did a decent job leaving us alone.
Biggest change I've seen is the intranet page now has an option to use IBM's single sign on in addition to RH's single sign on.
There have been larger changes in areas that the SEC could point their fingers at, to make things more uniform between IBM and Red Hat. Sales also had some changes on both sides.
For engineering almost no difference other than switching to Slack.
[flagged]
Sad to see this, but I will again shill for HashiCorp's Nomad as a better alternative to Kubernetes.
Nomad is way easier to self-manage than K8s, but GCP does that for me, with all the compliance boxes checked, for extremely cheap. Every cloud provider is in that boat. Nomad will be more work and more money, be it compute or enterprise fees. I'm sticking with k8s.
The convenience and ease of use isn't worth the increased costs?
Who's to say it doesn't cost more in time and effort?
I agree up to a certain scale. I've managed a large Nomad/Consul setup (multiple clusters, geographically separated), and it was nothing but a nightmare. I believe fly.io had a similar experience.
Having worked with a very large Nomad cluster, I cannot disagree more.
For simple use cases, sure, but you could also just use AWS ECS or a similar cloud tool for an even easier experience.
Can you quantify “very large”?
20k+ nodes and 200k+ allocs. To be fair, Kubernetes cannot support this large of a cluster.
Most of my issues with it aren't related to the scale, though. I wasn't involved in the operations of the cluster (though I did hear many "fun" stories from that team); I was just a user of Nomad trying to run a few thousand stateful allocs. Without custom resources and custom controllers, managing stateful services was a pain in the ass. Critical bugs would also often take years to get fixed. I had lots of fun getting paged in the middle of the night because 2 allocs would suddenly decide they now had the same index (https://github.com/hashicorp/nomad/issues/10727).
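For anyone wondering what "two allocs with the same index" means in practice: Nomad names each allocation `job.group[index]`, and the linked issue is about two live allocations ending up with the same index. Below is a rough, hypothetical sketch of how one could audit a job's allocations for that condition. The `Name`/`ID`/`ClientStatus` fields and the `/v1/job/:job_id/allocations` endpoint come from Nomad's HTTP API; the script itself is illustrative, not anything the Nomad team ships.

```python
import re
from collections import defaultdict

# A Nomad allocation's Name looks like "jobname.groupname[index]";
# the bracketed index is what issue #10727 shows being handed out twice.
_NAME_RE = re.compile(r"^(?P<group>.+)\[(?P<index>\d+)\]$")

def find_duplicate_indexes(allocs):
    """Return {(group, index): [alloc IDs]} for any index claimed by
    more than one non-terminal allocation."""
    claims = defaultdict(list)
    for alloc in allocs:
        if alloc.get("ClientStatus") in ("complete", "failed", "lost"):
            continue  # terminal allocs legitimately give their index back
        m = _NAME_RE.match(alloc["Name"])
        if m:
            claims[(m.group("group"), int(m.group("index")))].append(alloc["ID"])
    return {key: ids for key, ids in claims.items() if len(ids) > 1}

# Against a live cluster you would feed it the agent's allocation list, e.g.:
#   import json, urllib.request
#   with urllib.request.urlopen("http://127.0.0.1:4646/v1/job/example/allocations") as r:
#       print(find_duplicate_indexes(json.load(r)))
```

An empty result means every index is claimed by at most one running alloc; anything else is the split-brain-ish state the issue describes.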
Definitely not better than Kubernetes, but I don't regret working on it and I like it as a simpler alternative to Kubernetes. I remember trying to hire people for it and not a single person ever even heard of it.
> I remember trying to hire people for it and not a single person ever even heard of it.
I know, it's really sad. Kubernetes won because of mindshare and hype and 500,000 CNCF consulting firms selling their own rubbish to "finally make k8s easy to use".
I vastly prefer a hole in the head, personally.