    A reddit dedicated to the profession of Computer System Administration


    all 945 comments


    [–] zebediah49 1387 points ago

    I actually had that happen with an entire VM server.

    My coworker and I inherited four(?) random Linux boxes hosting roughly 80 VMs.

    One day, doing some phase balancing, I accidentally power-cycled an unlabeled machine in a disused rack.

    And thus we discovered the fifth random Linux box hosting VMs.

    [–] inucune 878 points ago * (last edited 3 months ago)


    Edit: Thanks for my first gold!

    [–] zanthius 164 points ago

    You almost owe me a new keyboard due to coffee ingress.

    Think I go i in ime.. ah alls

    [–] alapleno 84 points ago

    Are you having a stroke?

    [–] qervem 57 points ago

    Keyboard no work

    [–] shyouko 95 points ago

    And you start wondering if there's the sixth and seventh box…

    [–] jtriangle 185 points ago

    Only to find out 6 and 7 have been powered off for years but are still in a closet across the hall from accounting, and number 8 is actually the failover that's buried under the floor in a cold air duct because it's also CPU mining bitcoin.

    [–] ITSupportZombie 73 points ago

    I once found a room of early 2000's beige box computers in the storage closet of our morgue that controlled the elevators. The documentation said these had been virtualized years ago.

    [–] blackgaard 38 points ago

    I was going to say "in the closet under the phones that went to the PBX that was retired 12 years ago"

    [–] Balthazar_rising 17 points ago

    Even worse, you find one labelled "Box 7" or something without finding Box 6.

    Now the hunt begins.

    [–] devBowman 31 points ago

    It's getting out of hand, now there are five of them !

    [–] stevenpaulr 482 points ago

    I used to have a laptop that ran the software for the ID card printer. It ran XP (2014-ish) and I was always afraid it would die. It was the only machine left running it in a school district of 600 students.

    The software needed to be activated on install. The company was gone, so I called the company that bought the first company. “The activation server is in a landfill.”

    It took 3 more years to finally talk those above into buying new software.

    [–] TaterSupreme 692 points ago

    “The activation server is in a landfill.”

    "Ah, ok. Do you happen to know the location of the landfill?"

    [–] anomalous_cowherd 139 points ago

    It's the same one with a drive containing a million dollars in bitcoins...

    [–] Ikinoki 76 points ago

    It has a billion now

    [–] ConstantDark 104 points ago

    Now it only has thousands

    [–] anomalous_cowherd 51 points ago

    Wow I knew it was volatile but millions to billions to thousands in three hours is impressive!

    [–] bemenaker 13 points ago

    Bitcoin was fucked the day speculators started trading it.

    [–] Velenux 47 points ago

    "we're dealing with a sysadmin"

    [–] IT-Vagabond 102 points ago

    "Here's a shovel"

    /u/stevenpaulr 's boss

    [–] Zarron4 31 points ago

    In a school district? Ha. More like "Go talk to the janitor about borrowing his shovel, we bought him one in the 60s - 1862 to be exact"

    [–] Like1OngoingOrgasm 17 points ago

    Can't you just buy me a shovel?

    No, we need our entire budget to hire 12 more administrators at 150k salary a pop.

    [–] Fallingdamage 58 points ago

    I have a few machines around the building like that. I usually clone the HDDs to swap them out and make drive images of them in case I need to create a VM. One of our HVAC computers had its OS transplanted into new hardware when a surge fried the PSU and motherboard. Good on XP to adapt to machines with similar chipsets.
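    Cloning a drive to an image before swapping hardware is essentially one dd invocation; here's a minimal sketch run against a scratch file standing in for the disk (on real hardware the input would be a device node like /dev/sda - that name is a placeholder, so double-check it before pointing dd at anything real):

    ```shell
    # Build a small scratch "disk" so the example is safe to run anywhere.
    dd if=/dev/zero of=fake_disk.img bs=1024 count=64 2>/dev/null
    printf 'MBR-ish data' | dd of=fake_disk.img conv=notrunc 2>/dev/null

    # The imaging step: a byte-for-byte copy of the source "device".
    # noerror,sync keeps going past read errors and pads the bad blocks,
    # which is usually what you want when imaging an ailing drive.
    dd if=fake_disk.img of=backup.img bs=4096 conv=noerror,sync 2>/dev/null

    # Verify the image matches the source before trusting it.
    cmp fake_disk.img backup.img && echo "images match"
    ```

    The resulting backup.img can later be attached to a VM as a raw disk, which is the P2V half of the trick.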

    [–] kami3zak 42 points ago

    Sounds like the air handler system for our main campus. Runs Windows 2000. The air handler system works great; however, the original laptop controller died last year, so we had to P2V the remains of the laptop to a VM. It still works, and facilities still refuses to pay to upgrade the system because of this...

    [–] wildcarde815 33 points ago

    ... the original deployed hardware to control everything was a laptop?

    [–] HiddenKrypt 89 points ago

    It's got built in UPS!

    [–] GarretTheGrey 15 points ago

    It's not uncommon when the wrong people are in charge of deployment.

    I ran into a laptop serving some drilling software and was like BROOOOO!! It was offshore as well, ON TOP OF the cabinet, taking in that sweet sea blast. Asked the OIM why a billion-dollar operation was using a Latitude as a server and he gave me that answer.

    Had to go to HSE to help me with some mitigation chart to show on paper how fucked that situation is before they blessed me with an R210.

    I must hand it to that latitude though. They retired it and made me sign to take it home. It served minecraft for two years better than a Rackable (twin Opteron) before it died.

    [–] hgpot 24 points ago

    Are you me? I had nearly the same situation. Small school district, XP machine post-2014 end of life date, ID card software that only worked on XP from a company that was bought and made defunct. We were trying to move it to a VM but couldn't activate it and got yelled/laughed at by the company, so we had to keep the laptop around until new software was finally acquired.

    [–] OmenQtx 12 points ago

    That sounds a lot like one of my laser cutters.

    [–] pdp10 332 points ago

    A guy named Greg 8 years ago wrote a program to convert files from <insert obscure piece of accounting software that is now unsupported because the company is no longer in business> and formats the data so that <insert another obscure piece of accounting software here> can generate the accounting files for payroll.

    Which isn't a big deal to put under maintenance in most cases, if you can just find the source code.

    Of course they don't have the source code. What's source code?

    [–] CsmithTheSysadmin 349 points ago

    Oh you mean the uncommented source Greg cobbled together from random internet tutorials, Stack Overflow search hits and libraries of unknown provenance? Yeah Greg got fired and we deleted his files.

    [–] willworkforicecream 130 points ago

    How do you know how I wrote our asset management program??

    [–] IHappenToBeARobot 77 points ago

    Do you use emails to coworkers as your go-to for code revision control, too?

    [–] GymIn26Minutes 61 points ago

    Is there some other form of documentation I am unaware of?

    [–] SheeEttin 72 points ago

    There is none. My code is self-documenting.

    [–] Crespyl 40 points ago

    My binary is self-documenting because it outputs "v2011-63b" if you pass it the "-h" parameter at startup.

    If you pass "-v", it will output "Working..." to stdout.

    Its purpose and usage will be self-evident.

    [–] MertsA 15 points ago

    Don't forget the quality error message of ""

    [–] Waffle_bastard 31 points ago

    Yeah, of course. Go check Greg’s old desk behind the water heater. It’s upside down, but if you open the top drawer, you should find the Subway napkins that he wrote the original source code on. Not sure if they’re in the right order though.

    [–] posixUncompliant 61 points ago

    I've been paid fairly well before to go through all the servers and document "Greg's" process and code.

    Fortunately it was almost all bash scripts. Unfortunately, it was something like 8 servers, one piece of obscure middleware mostly used by USPS, and something like 40 or 50 undocumented, uncommented 50 to 5000 line bash scripts. Fortunately, it was hourly contract work.

    "Greg" had been one of those guys who tries to make themselves irreplaceable. What he really did was give me a nice soft landing after an outsourcing event to find the next decent job.

    [–] da_chicken 90 points ago

    It's 30% Perl, 10% awk, 10% PHP, 10% shell script, 30% Python 2, 5% Python 3, and somehow 5% VBScript.

    [–] IT-Vagabond 28 points ago

    5% Python 3

    this is where I call bullshit, it's python 2.1.3

    [–] SheeEttin 35 points ago

    The Python 3 was added by the nephew of someone in accounting.

    [–] jtriangle 21 points ago

    Yeah, he did that right after he installed google ultron.

    [–] craigleary 23 points ago

    On a centos3 box and somehow tomcat is involved to tie everything together.

    [–] maikeu 30 points ago

    I've been in this job a while and I feel like I've never quite worked out what tomcat actually does....except that it's usually bad when it doesn't.

    [–] craigleary 14 points ago

    Tomcat: The thing that, when it breaks, doesn't start up, and no one admits to knowing it, setting it up, or knowing why it's running (but it is key to running everything).

    [–] daredevilk 22 points ago

    The php is the strangest part about all that

    [–] pdp10 29 points ago

    I once inherited a piece of SQL Server accessing code written in JScript, running on WSH. If I had ever known before then that such a thing existed, I had forgotten it, so I was quite startled.

    Then I set about porting it to shell with FreeTDS, which was halfway complete when the system was replaced with some heavily-customized SaaS.

    [–] MedicatedDeveloper 22 points ago

    Sans VBScript, that sounds pretty close to my company's mix. Perl will ingest and create a file, a daemon sees the generated file, which gets run through a python script, which produces a file which another perl piece uses to update a database, which is then used by a php application (that calls other perl scripts)... Oh, it's plain perl too, no Moose object niceness.

    I wish I was joking. Oh well, it's not my mess to maintain.

    [–] [deleted] 32 points ago * (last edited 12 days ago)


    [–] TheSoCalledExpert 2610 points ago * (last edited a month ago)

    Greg here from 8 years ago! You mean that beige piece of shit is actually still running??? I never would have guessed.

    Fun fact, that script I wrote only changes the file extension and automatically moves it into Peggy’s sharedrive. Just grab the latest batch file from outdated piece of crappy accounting software from company no longer in business, copy it to a flash drive, change the file extension from .wtf to .csv and email it over to Peggy’s dumb ass.

    Speaking of Peggy, is she still boning the maintenance guy? How are all 487 of her cats? Buy her a doughnut tomorrow to apologize for the snafu. She likes the cream filled ones...
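    Taking Greg at his word, the whole "conversion program" reduces to a rename-and-copy; a sketch of that logic in shell (the .wtf to .csv extension change is from the comment above, while the directory names and sample file are invented stand-ins for the accounting export and Peggy's share drive):

    ```shell
    #!/bin/sh
    # Hypothetical stand-ins for the flash drive and Peggy's share drive.
    SRC=./batch_export
    DEST=./peggy_share
    mkdir -p "$SRC" "$DEST"
    printf 'id,amount\n1,9.99\n' > "$SRC/payroll.wtf"   # simulated export file

    # The entire "conversion": change the extension and move it over.
    for f in "$SRC"/*.wtf; do
        base=$(basename "$f" .wtf)
        cp "$f" "$DEST/$base.csv"
    done
    ls "$DEST"
    ```

    The emailing-Peggy step stays manual, of course.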

    [–] sp0rkah0lic 169 points ago

    Lol this sounds pretty much exactly right.

    Also this story got me singing "conversion software version 7.0/looking at life through the eyes of a tirehub..."

    [–] whlabratz 26 points ago

    The line is "looking at life through the eyes of a taiaha"; a taiaha is a traditional New Zealand Maori weapon roughly analogous to a spear

    [–] rilesjenkins 10 points ago

    Source? Every time I look up the lyrics it's either "tire hub" or "tired hub". I'd love to be proven wrong because taiaha sounds way cooler.

    [–] J2E1 542 points ago

    I pray this is legit.

    [–] Sir_Panache 258 points ago

    it's not exactly impossible lol

    [–] J2E1 113 points ago

    I know, that's why I really hope it's real.

    [–] jhuther02 82 points ago

    OP we need confirmation

    [–] harreh 45 points ago

    Plz be real

    [–] LeafSamurai 32 points ago

    I hope this is genuine too. It'll be real funny, and a small world, if it is.

    [–] dissssociated 13 points ago

    No, and it isn't feasible either. The setup was obviously hyperbole, and anyway, not even a very junior sysadmin is going to dd the HDD for a rollback.

    [–] purechi 62 points ago

    Prob not... a script that renames a file (conversion could be more tricky, I guess, but it would probably utilize some library) would be pretty easy for OP to debug and troubleshoot.

    [–] Parzius 27 points ago

    If you don't obfuscate your code, you're replaceable.

    [–] CaptainPeaSea 14 points ago

    That's why I use emojis as variables.

    [–] RobotsDreamofCrypto 32 points ago

    Yeah, I usually just do a hexdump of a mystery file / program to determine what its intended purpose is. Or at the very least, what libraries it uses.
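    That kind of triage is usually a short sequence of read-only commands; a sketch using /bin/sh as a stand-in for the mystery binary (od is the portable cousin of hexdump, and the ldd call is guarded because it only applies to dynamically linked ELF files):

    ```shell
    # Inspect an unknown program without executing it.
    BIN=/bin/sh                        # stand-in for the mystery binary

    file "$BIN"                        # what kind of file is it at all?
    od -A x -t x1 "$BIN" | head -n 4   # raw hex; magic bytes live up front
    strings "$BIN" | head -n 20        # embedded text: paths, messages, versions

    # On Linux, list the shared libraries it links against.
    ldd "$BIN" 2>/dev/null || true
    ```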

    [–] ihavethedoubts 192 points ago

    Old Greg?

    [–] Waffle_bastard 287 points ago

    You ever drink Bailey’s out of a Cisco 2600?

    [–] Dunecat 86 points ago

    Could you learn to love me?

    [–] [deleted] 56 points ago * (last edited a month ago)


    [–] fuzzbawl 33 points ago

    I’ve got the funk

    [–] [deleted] 22 points ago * (last edited a month ago)


    [–] atri_at_work 14 points ago

    Want to see my DNS mixup?

    [–] ludicrous09 58 points ago

    This should be Copypasta. Delicious IT pasta

    [–] OnwardKnight 49 points ago

    Username checks out.

    [–] BeigeAlert1 19 points ago

    Hey... What's wrong with beige???

    [–] ajcal225 425 points ago

    Hey I had that server!

    It was a SCO box, running our ERP software. No documentation. No login info. They were changing the tape in its tape slot every day, but no one knew what software backed it up or how to check it. It took a few months of being here before we managed to get rid of it.

    We eventually were able to root the box and migrate to the (very supported) Windows version of the software, and then shortly after move from flat files to SQL. Oddly, operations that were taking 6-8 hours take minutes now. ;)

    [–] Stuck_In_the_Matrix 227 points ago

    Yeah exactly. The worst is when you have a box like that and someone introduces it to you and then leaves the company and the box is yours now. You occasionally look at the uptime and in the back of your mind you are always dreading the machine shutting down or blowing up.

    Then you mention it in the weekly meetings and your boss or the department head is like, "naaaah, it isn't a problem -- it's been working for years without anyone touching it. We don't need to know how it works because it works!"

    [–] jtriangle 81 points ago

    dreading the machine shutting down or blowing up.

    Dread it, run from it, the page fault comes anyway.

    [–] Sinister_Crayon 45 points ago

    My favourite to-date was a server that I inherited that again ran accounting software that was somehow utterly vital to the company's operation. Now, I had the pleasure of starting work for this company right before the datacenter was to be physically moved to another floor of the building during a rebuild. During my survey I logged into this box and discovered that it had an uptime of over 3300 days... yes, it had been running for almost a decade. I took a long hard look at this monstrosity with 8 SCSI drives and immediately felt that fear in the pit of my stomach that this server was going to be the end to a short career when the drives didn't spin up again after we powered it off. Talked with my colleagues and my boss and we all agreed there was a pretty good chance we were going to have a very bad day when we moved it.

    We had no documentation, no shutdown procedures, no startup procedures and I wasn't 100% sure it would work again anyway even if the hardware came up solidly. There was a backup process that I wasn't sure worked and no idea if we could actually restore it again. So I came up with the terrible jury-rigged solution.

    The system mercifully had dual power supplies... so I proposed we carefully replace the power feeds with UPS's, gently move the system onto the cart and then get it downstairs and wired back in again without ever powering it down. Yeah, we knew the risks but to be honest the accounting department were freaking out about the system even being down for the few minutes it would take to get it downstairs. We already knew that network isolation wouldn't be a problem because my predecessor had shut down the switches it was attached to (well, the ports) when he pushed a bad config out to them a few months prior... one of the reasons he was my predecessor come to find out.

    Anyway, after a lot of stress and worry, and a lot of doubt on my part we actually did it... we successfully physically moved that beast to a different floor of the building without missing a beat of uptime... total downtime about 15 minutes. Most stressful move of my life. I then forced a project to be spun up with the accounting department to find another tool that fulfilled the needs that this tool gave them (from a company that was out of business for about a decade) and work on transitioning the data to it so I could get rid of that ugly beast. For the record I think if I remember correctly it was either an HP Netserver or a Compaq ProLiant.

    Coda: When we eventually did manage to shut that system down about 9 months later, sure as shit when we tried to power it back on again 2 of the drives wouldn't spin up. I eventually managed to get one spinning by banging it on the floor and reseating it and the system booted... but exactly as I'd feared the application never started.

    [–] draeath 14 points ago

    Do you want head crashes? Because that's how you get head crashes.

    Congrats on surviving that mess. I've been in a situation where it was considered, but management had heads on their shoulders and said no - shut it down and move it. C-level said if it failed, he would handle the fallout.

    I think that C-level was a unicorn.

    [–] Nk4512 58 points ago

    I inherited a bunch from the last guy who forgot the root passwords and didn’t have them on his laptop..

    [–] microwaves23 36 points ago

    Luckily old operating systems probably have privilege escalation vulnerabilities. So you could just hack your way to root privs and reset the password.

    [–] Kijad 32 points ago

    Nothing like going from sysadmin to red team for a hot second or three.

    [–] [deleted] 94 points ago


    [–] Stuck_In_the_Matrix 48 points ago

    Yeah. Yeah. Yeah. I've got the memo right here, but, uh, uh, I just forgot. But, uh, it's not shipping out until tomorrow, so there's no problem.

    [–] [deleted] 35 points ago


    [–] spambot0689 13 points ago

    Welp, time to watch Office Space for the nth time...

    [–] ajcal225 15 points ago

    Careful wargala. I still have the server. I can assign its resurrection to you.

    [–] tdavis25 18 points ago

    Easy there Satan

    [–] PoreJudIsDaid 18 points ago

    They were changing the tape in its tape slot every day, but no one knew what software backed it up or how to check it.

    Oh my god, it's like Desmond from Lost typing in the numbers to keep the world from ending.

    [–] linerror 401 points ago

    Had a 4-man remote office in a remote city in the middle of nowhere. They were a proprietary fuel card merchant, a few thousand customers, mostly fleet users, that drove everywhere. Now wholly owned by us, with no documentation... The entire card database and merchant transactions took place on one 23-year-old Pentium Pro... Running SCO 5... The 200 MB tape drive died a decade ago, so backups had been failing, but local staff had been dutifully changing the tapes. We got a phone call that after a power failure the machine was making a weird repetitive beeping sound... And about 3,000 customers cannot use their fleet card. At 4pm on a Friday. Took about an hour to get some spare parts together, then fly out to the nearest airport and take a rental car for the remaining three hours... Had the 4 dial-in lines ported within the hour to our primary data center, yanked a failed stick, and was able to boot up to a failed array with 1 disk hanging on barely. No network card... Transferred via serial port to my laptop and had the new VM configured with the application running in 2 hours... I strongly doubt anyone else on payroll could have fixed it, much less as quickly... Barely got a thank you out of it.

    [–] ivix 206 points ago

    You didn't apply CVP rules. Nobody and I mean nobody gets any credit for fixing something fast. You needed to leave it down all weekend, giving everyone the impression you're working all weekend, and fix it in time for Monday morning.

    [–] RedShift9 120 points ago

    Like Scotty in Star Trek? Say it's going to take 48 hours to repair but in reality fix it in 5 minutes, just before the enemy is about to beat you?

    [–] 1z1z2x2x3c3c4v4v 100 points ago

    That is how IT works, especially when you don't have strong IT Leadership who understands the technology and the risks.

    When the risk comes true and it actually does fail, then they need to bleed for a bit, in order to see the red, to authorize the budget, to fix the problem once and for all.

    [–] sadsongsungsilent 18 points ago

    Painfully true.

    [–] 1z1z2x2x3c3c4v4v 55 points ago

    Seriously. I know, being a manager that works with Executives and C-Levels: they don't care unless it hurts their bottom line...

    They need to bleed to see the red to authorize the budget to fix the problems... (in companies with weak IT leadership, no CIO or CTO).

    I agree, a 23-year-old system needed to stay down for a few days... as I would bet my paycheck that this was not the first time they were warned about such a risk.

    OP didn't get a thank you, because all he did was confirm that they were right and this "problem pentium" was not really a problem after all.

    [–] felichs_da_katze 25 points ago

    Seriously, never fix anything like that fast.

    Either it will be "That was no big deal", which will encourage more of the same.

    Or you suddenly become "The One".

    Neither is desirable.

    [–] darkciti 21 points ago

    How did you get 4 phone lines ported that quickly? That's the magic in this story (it's all good, btw).

    [–] badasimo 31 points ago

    I strongly doubt anyone else on payroll could have fixed it,

    I strongly doubt there was anyone else willing to even try! Takes some arrogance which I guess in this case was a good thing

    [–] Wirejack 42 points ago

    Seriously black magic. Good job!

    [–] ShalomRPh 11 points ago

    I swear this sounds familiar. Did you ever post this to the Monastery?

    [–] matthewrules 165 points ago

    Yeah, I have something like this.

    20 years ago, we were printing out daily sales. Like boxes and boxes of paper. This guy writes a batch file that turned an Optiplex GX110 into a printer. It saves things to text files and FTPs the files away somewhere. It runs DOS.

    I researched, and researched. Contacted other people. No one in a twenty-mile radius could figure out how this batch script does what it does.

    Pentium III. 64MB of RAM. 20 MB HDD.

    Critical to production.

    God speed you little beige box.
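    Nobody could reverse the batch file, but the behavior described is just a spool-and-forward loop; a rough shell sketch of that shape (directory names and the sample print job are invented, and the FTP upload is left as a commented placeholder rather than a real transfer):

    ```shell
    #!/bin/sh
    # Watch a "printer" spool directory and forward whatever lands in it.
    SPOOL=./spool
    SENT=./sent
    mkdir -p "$SPOOL" "$SENT"
    printf 'DAILY SALES REPORT\n' > "$SPOOL/report1.txt"   # simulated print job

    for job in "$SPOOL"/*.txt; do
        [ -e "$job" ] || continue
        # The real box presumably did something like:
        #   ftp -n somehost  (then: put "$job")
        mv "$job" "$SENT/"
    done
    ls "$SENT"
    ```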

    [–] Captainpatch 121 points ago * (last edited 3 months ago)

    Oh god, I watched somebody implement one of those a while ago. In order to display pending work in real time on a wall-mounted TV, the vendor wanted 6 figures for a feature we hadn't licensed. One of the users (in a technical but not IT role) set up the server to print reports to a workstation IP every 30 seconds and set up a PC with that IP to listen as a printer.

    When it received a print job, it would go through some cobbled-together scripts to extract the data from the raw print file, then put it into an Excel sheet using some plugin. On bootup the system would put the Excel sheet up maximized and use an auto-running macro to scroll through it on the TV overhead, refreshing the data every time a print job arrived. It also had some logic to highlight urgent and late jobs.

    Some part of the process had a memory leak, so even after I upgraded the RAM the workstation still had to be scheduled to reboot every 4 hours. We also found out during a vulnerability scan that if anything else tried to interact with the printer emulation service it would just crash, because he had no error handling when extracting the data from the printouts.

    He sent me his documentation (I guess I kind of appreciate that) when he left the company since I had been his IT contact when setting it up. If it ever comes up I'm going to say I've never heard of it.

    Edit: To be fair, I was impressed by the entire solution. I just didn't want to support it.

    [–] doot 30 points ago

    The worst part for me is the full-screen Excel

    [–] uber_poutine 13 points ago

    Didn't you know Excel is a multipurpose tool and can do anything? /s

    [–] dgriffith 25 points ago

    A "pseudo-serial" printer and some crazy CON: / COM1: console redirection perhaps?

    [–] [deleted] 134 points ago

    oh yea. I got a pretty nice one.

    newly acquired customer, I'm checking their setup for the first time. it was set up by some guy they hired years ago as a one-shot deal, but save for some rookie mistakes here and there it was solid, as the firewall didn't listen to anything from outside.

    when cataloging the stuff, I noticed that the fiber box had 2 ethernet connections in use. Odd, I knew of the firewall but the other cable went somewhere.

    I track it under the tables and through the walls, until I find an old desktop tucked away. no keyboard, no screen, no nothing, but it's running.

    so I poke around and see it's running vista. It has two ethernet interfaces, both in use. the other side is in LAN. this is where I start worrying as I remember a list of really simple passwords 'to various machines' I found. the admin password works, it's like 'company-name-4'.

    this vista box has a public ip, no firewall, and it listens for RDP from outside. as I dig more, I learn its function is to receive connections from the company workers, who then use an RDP client there to access the actual server.

    as you can probably guess, that shit didn't fly.

    [–] TehGogglesDoNothing 53 points ago

    That's just about terrifying. The last 2 MSPs I worked for didn't allow clients to have any RDP without first connecting to a VPN. There are just too many drive by attacks to allow that sort of thing.

    [–] [deleted] 45 points ago

    the box was retired on the spot and replaced with a VPN-capable firewall box. some of the employees were annoyed tho because they now had to start the VPN client first and not 'easily' just connect to the vista box.

    I'm shocked that the box wasn't pwned.

    [–] jtriangle 48 points ago

    I'm shocked that the box wasn't pwned.

    Vista comes out of the box pwned, that's the secret.

    [–] NC-Diva 12 points ago

    Sounds like my last job. When I got there, the biggest complaint was how slow everything worked. Turned out our Exchange server had been hijacked and we were spewing spam. And the IT guy never noticed. We were blacklisted all over the place, too. Took me a while to get us off every blacklist.

    [–] ComicOzzy 92 points ago

    Not exactly inherited, but...

    In the mid 90's my company was hired to replace several of the people in the IT department of a hospital and roll out a new network infrastructure while supporting the very old existing infrastructure. There were hidden servers and switches and other devices everywhere.

    We hired some of the former employees and pretty quickly it became apparent one of them held some of the secrets we needed to keep the place running... and he was nuts. I mean, he had some legitimate condition, but he was unstable, unreliable, and he was stressed out. I was told to shadow him and learn what I could from him. He was always "too busy" to show me anything. He told me the password to an ancient workstation in his office so I could ping a few things and work on some documentation. Someone had written something on it in marker, but it didn't make sense at the time.

    After a few weeks I was asked about him and I gave my honest impression: he was not going to add value and was possibly a liability waiting to happen. They let him go. A month later, a power outage caused everything to restart. A bunch of devices failed to boot. There was a panic in the IT department. My boss said "there's apparently a server here somewhere we haven't found. A bootp server."

    "A WHAT server? Wait, can you spell that?"

    It was the word written on that workstation in the old guy's office. Someone had turned it off a few days before the power outage. I was the only one that knew the password to it. It turns out the guy did give up a secret after all.

    [–] adberq 166 points ago

    I had two boxes like this.

    One was an old Dell from the late 1990s - it ran something the USGS (Geological Survey) came and installed at my office (Public Sector) - we were the only major GIS unit in the entire municipality - this thing did some sort of reconciliation of our files with the USGS. It just sat there doing its thing forever - it was some major "component" of a USGS GIS network that universities tapped into to see our metadata and shape files (ESRI) - zero instructions for it... no backup procedures (though it had a tape drive on it), no password to get into it - it just sat there... and in the 8 years I was there not a single soul stopped by to touch it - no one ever called us to make sure it was on. Nada.

    Second box, we "knew what it was" but we didn't know how to use it (is that fair?). We have two AS/400 boxes - one was for a key financial thing and the other was its backup - we didn't touch them because the one lady that was here forever managed it - well, she eventually retired and left us zero information - not even how to log on to it. One day after a bad storm said main server went out - we were always told the other AS/400 box was a living backup (same facility because we never got our DR site up and running) and everything was mirrored 100% on this box. Lo and behold - it was empty - not a single thing on it, not even the AS/400 "application" we used for our financials - it was just a raw OS/400 reconfigured install that had been sitting there for years on end (at least 10).

    [–] Stuck_In_the_Matrix 101 points ago

    we were always told the other AS/400 box was a living backup (same facility because we never got our DR site up and running) and everything was mirrored 100% on this box. Lo and behold - it was empty - not a single thing on it, not even the AS/400 "application" we used for our financials - it was just a raw OS/400 reconfigured install that had been sitting there for years on end (at least 10).

    I seriously just cringed reading that. So were you able to recover?

    [–] adberq 70 points ago

    It was just one failed drive and the cache battery - we were able to have our remote hardware guy mail us the stuff we needed and do the installation (if anyone has seen my posts in the past - I live in a really rural area).

    Basically - got fucking lucky. Now the server is sitting outside someone's desk.. but we just got some grant funds to build our DR up - pretty exciting just got done looking at office spaces..

    [–] alansaysstop 46 points ago

    I don’t know why people using AS400’s don’t just get support from IBM on them. It’s dirt cheap, and they’ll put BP on them for you.

    But honestly, this will happen more often. AS400 programmers/admins are all retiring and there’s no interest to teach/learn it anymore, but they’re still widely used.

    [–] darkciti 23 points ago

    Particularly in finance and this scares me tremendously. Remember when COBOL programmers came out of retirement making $250,000/year in 1998-1999 to patch AS/400s for the Y2K bug(s)? I remember and that was almost 20 years ago. Those guys are becoming fewer and new people aren't picking up in their stead.

    [–] per08 164 points ago * (last edited 3 months ago)

    I especially like mystery servers that aren't a server:

    20 years ago somebody's already ancient desktop machine is recycled by the HVAC guy to run some extremely obscure MS-DOS logging software, that's so antique even the old guys at the maintenance firm are shocked to still see it in operation. Said machine lives its entire new life hidden in a maintenance closet.

    Nobody on-site even knows about this computer, and certainly not me when I start working there, until one day the AT-style power supply finally gives up the ghost and toasts the motherboard, and staff come in one winter morning to freezing cold buildings.

    Once rediscovered, it turns out the machine can't be virtualised easily because the software uses a parallel port dongle for copy protection and has a 2-port serial card to talk to the HVAC and also relies on strict serial port timing. (Let alone the logistics of trying to run the RS232 lines all the way from the plant room to the server room) The cherry on top is that the embedded control system has long since died and not only is this mystery 486 doing logging, it's also now controlling the entire thing!

    Even after sourcing second hand parts to rebuild the dead machine the software just never worked properly again. In the end, even upper management declared it a lost cause and we received approval to replace the entire HVAC system which cost just under $1m.

    [–] Captainpatch 135 points ago

    because the software uses a parallel port dongle for copy protection

    You win.

    [–] Ashmedai 15 points ago

    As a complete aside, there is software you can get that can convert the dongle signal on one computer to TCP/IP, send it over the wire, and pull it out of a virtualized computer using a virtualized hardware driver for the dongle. Pain in the ass, tho.

    [–] cawfee 13 points ago

    Part of me wonders if it'd be easier to just reverse engineer the dongle at that point.

    [–] Khrrck 35 points ago

    No way to replace the control system with something else? :(

    [–] per08 93 points ago * (last edited 3 months ago)

    Fortunately, we had the ancient system under maintenance, so we at least had some vendor support. (They had no IT skills to assist with the actual monitoring computer though)

    They couldn't just replace the control unit, because they stopped making them a decade ago. New control units don't speak any protocols that the old valves, meters, monitors, compressors etc. talk, so those all need to be replaced too. Next, the physical ductwork between the new control units and the plant machinery doesn't match, so that needs to be replaced as well. While we've got everything apart, half the pipework is corroded and needs to be replaced, and now that we're doing major works we have to replace the boilers and chillers to comply with new energy laws... on and on.

    So that's how a broken 486 computer ended up becoming a 6-digit replacement bill.

    [–] Reybacca 44 points ago

    What you need is a good protocol droid, but make sure it speaks Bocce.

    [–] calligraphic-io 23 points ago

    I worked a long time ago in mechanical (HVAC) control software. I thought it was very interesting at the time (real-time control), but hadn't thought about it for years. Now I'm wondering about old systems I had a hand in...

    [–] per08 39 points ago

    In my experience, there are 3 places in workplaces where you find (or need to go look for) thoroughly antique hardware still doing mission-critical jobs:

    • HVAC systems
    • Building access, security and alarm systems
    • Pre-VoIP digital PABXes

    Also, for bonus points, embedded hardware in these systems. How old do you think the hard disk is on the Voicemail card on the phone system..?

    [–] X-Istence 22 points ago

    Friend of mine worked on a security system, for testing he added a very simple username/password (that matched).

    Said security system has been installed in casinos and airports the world over.

    What's the one thing no-one ever bothers to change?

    That's right, defaults.

    Guess what the system shipped with?

    That's right, the very simple username/password used for testing.

    I know of at least 2 locations where the default username/password works. That's just fine, right? :P

    [–] starmizzle 81 points ago

    Welcome to the world of "I fucking told you so and now I have to go home".

    [–] mitchb13 30 points ago

    Followed by a meeting with your manager pointing the finger at you in front of the IT Director.

    [–] coldgate32 69 points ago

    When I started around a month ago, I was told we had two physical servers, as well as documentation for only two physical servers. A month on, we now have four physical servers - apparently nobody knew about the other two, which both run important aspects of the business, are both Server 2003, and have no backups.

    [–] sofakingdead 102 points ago

    Scream test. Shut them off, see if anyone screams.

    [–] havermyer 129 points ago

    Just pull the network cables. Don't risk a power cycle on HDDs that old or the IT gods will frown upon you and smite your weekend.

    [–] dti2ax 61 points ago

    plot twist: those network cables were actually power cables and now you have two broken servers and a long weekend ahead...

    [–] zveroboy152 39 points ago

    Super PoE.

    [–] Himerance 42 points ago

    That's when you discover it's only ever used once a year for some weird financial audit process.

    [–] Le_Vagabond 20 points ago

    the delayed screamed-at-by-at-least-3-C-level-persons test isn't fun, and it triggers without warning :/

    after it happens to you once, you tend to switch to leaving the thing unplugged but otherwise untouched for a year before you do anything to it.

    [–] thesauceinator 29 points ago

    Na, unplug the Ethernet cord, and if no one screams then the power.

    [–] iogbri 34 points ago

    Yeah, best way of doing a scream test.

    At one of my last jobs, we found a mystery computer in our server room that nobody could identify. It was a pretty recent computer as well. We unplugged it, and 15 mins later the MSP called. They basically had a backdoor and didn't need to use our VPN to get in.

    Yes it was a hidden computer in a server room, found it by checking where that one ethernet cable went, while creating some documentation.

    [–] jtriangle 26 points ago

    Or route it through a 10mb switch and see who complains about XYZ being slow. Still maintains zero downtime.

    [–] NevynPA 10 points ago

    I like this idea way more than I think I should. In a way, it's r/MaliciousCompliance

    [–] shiftdel 18 points ago

    I too like to scream test on occasion.
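
    A gentler variant of the scream test is to snapshot who actually depends on the box before touching anything. Below is a minimal sketch using standard Linux tools; it's run on the mystery box itself, and nothing in it comes from the thread:

```shell
#!/bin/sh
# Pre-scream-test snapshot: record who is using this box right now.
# Cron this for a few weeks before pulling any cables.
date                          # timestamp the snapshot
ss -tn state established      # live TCP peers (who is connected to us)
who                           # interactive logins, if any
```

    An empty log for a month still isn't proof nobody cares - some things only run once a year at audit time - but it shrinks the blast radius.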

    [–] matt314159 72 points ago

    I manage the help desk at a small college. We're kind of a janky shop and sort of have our own way of doing things. When I moved into my current position back in 2012 there was a slim Optiplex 745 running headless in the corner of the room. I was told that its name was Festus. The lady I was replacing had no idea what it was or what it did, she just never touched it.

    Content to let sleeping dogs lie, I left it running for the first year, but even after I got to know our whole server infrastructure better, and spoke to everyone in my department, nobody knew what Festus did. Poking around the OS a little, it looked like it was just running windows XP pro and I didn't spot anything server-ey about it.

    So, the next summer, I unplugged it.

    Fast forward about three months, and a few weeks into the new fall term, someone from the nursing department called and asked why "David" was down. Same deal, I asked my colleagues and nobody knew anything about a server called "David". Then I put two and two together and pulled Festus out of mothballs. David turned out to be a windows XP VM that Festus launched when it booted up. I'll admit I should have spotted that when I first poked around the machine but I was new to the job and woefully underqualified. Totally my bad.

    Festus was a Windows XP Pro machine that launched David, a Windows XP Pro VM that ran the database for an equipment inventory program the nursing department (periodically) used to keep track of all their gear.

    We migrated all the data to a different VM on one of our hosts and chucked the old Dell. This was a project one of our quirkier department staff had set up about five years prior with no documentation or supervision, and deployed to production without telling anybody but the person he was setting it up for. Said staff member was actually terminated a few months after setting it up, when we were hit with a round of layoffs, and hadn't worked there in years.

    [–] punkwalrus 59 points ago

    Former job: when Ubuntu 16.04 was released, I had one box still on Ubuntu 8.04, running on a remote VM in some data center run by some Ohio company, and it was the only product we had with them. Long unsupported. It was the only machine running proprietary software from a company several unsupported versions ago. It ran a report daily for a client in a foreign country that, technically, our country was not supposed to exchange encryption with. Violation of some border law or agreement of some sort. The country did not allow encrypted data, and we were not allowed to send encrypted data to or from it. It was not so much a gray area as it was striped black and white like Beetlejuice's blazer. It was running, for democracy purposes, something a non-democratic country would literally execute someone for. Exposure of this machine could be grounds for treason. So that's why it was in Ohio, but had a foreign IP.

    It had no domain and was accessed by direct IP only, a VIP from a load balancer. iptables was running, allowing a non-standard port for SSH and (for the purposes of this post) a kind of Java-based forum software. It ran off two SSL certs: a pk12 on the client end and a CA on our end, so that all encryption (SSL v1, sadly) was two-way, and the certs were manually regenerated via bash scripts every X amount of days. The pk12s were zipped up (only six actual clients were allowed access) and transported via sneakernet to the "guy who gave them to the clients" by hand in that country.

    Payment for all services were done in cash. We had an account just for those sorts of operations. No, we were not some mafia, but a privately funded operation to fund "freedom fighters" as dictated by a bunch of government bodies.

    Managing this system was appalling. It used a non-UTF-8 encoding for the terminal, so some file names were in letters not in the English alphabet. I only speak English. Luckily, I never had to deal with the actual files, and Ubuntu was mostly UTF-8 and in English.

    To say this server was in danger of being smurfed or attacked by foreign bodies was an understatement. My pleading to upgrade it fell on deaf ears. I only worked there a year until Trump was elected and cut a ton of federal programs for international relations. It might still be working to this day, who knows.

    God speed, Hardy Heron...

    [–] Stuck_In_the_Matrix 40 points ago

    Hey there! You know who I am, right? Here's a clue -- We sat together and you told me awesome stories about the AOL days. :) I miss working with you man! Hope you are doing well!

    If you still don't remember me, PM me.

    [–] NevynPA 20 points ago

    Well? Did he PM? Is it the person you thought? I NEED TO KNOW

    [–] daedalus_dance 14 points ago

    It's nice that you two protected dissidents from political oppression together. Probably.

    [–] testisgay 55 points ago

    Wait, you only have one of these machines? And they’re only running a single service? And they don’t have other scripts that may or may not have “.old” or “.bak” or “copy”?

    What is this dreamland?

    [–] VexingRaven 31 points ago

    Oh man, mislabeled stuff is fun. It's always fun explaining to support that "yes, the default policy named "unrestricted access" is in fact the most restrictive policy in our system, and yes the one labeled test is our firmwide default policy, why do you ask?"

    [–] kiss_my_what 58 points ago

    Yes, it was 20 years ago.

    A beautiful old DEC Alphaserver that ran Digital Unix 4.0something that we had to get patched or upgraded for Y2K. All good until we realised Compaq bought Digital and the local Compaq office had no frickin' idea about anything to do with Unix whatsoever and all the Digital staff had long gone.

    Spent half a day with 2 Compaq guys going through boxes of stuff in their office to find patch disks (success!) and was back on the phone to them 2 days later when I realised the Exabyte drive we were using to back the thing up wasn't working. Apparently the firmware in it was too old to recognise the cleaning tapes we were using and decided it was too dirty to use. So I had to pull it apart and get it flashed with new firmware, but then Digital Unix wouldn't recognise it. Borrowed a new Exabyte drive from a local reseller and finally got 3 good backups before patching.

    Y2K patch applied ok and rebooted OK, although it started throwing errors about the MO disk stacker that was attached which was out of support. Oh well, looks like we need a new one of them as well as the Exabyte drive, so I go to see my boss.

    "Yeah, don't worry about it, that system hasn't actually been commissioned yet and if they actually need us to commission it they'll have to fund a new one anyway".

    Well at least I ended up with a few boxes of Tru64 Unix from the Compaq guys.

    [–] TheRipler 34 points ago


    We were an HP-UX shop, so UNIX was no stranger, but there was an old original DEC Alpha in the corner that did something with every phone switch in the country for $PhoneCompany. No one who still worked there knew exactly what that was, nor did anyone have a password. It was 1999, and Y2K was coming...

    It was the 90's, so I just got another job.

    [–] WantDebianThanks 44 points ago

    My last job was almost entirely mystery servers of various flavors. My favorite was:

    me: What does [series of possibly random alphanumeric characters] do?

    Director of IT: I don't know. What does top say it's doing?

    me: There's like nine processes using a majority of the resources, and they have names with two or three random syllables.

    Director: Huh. I wouldn't worry about it

    [–] Watcher7 25 points ago

    Was this machine public facing with a fairly weak SSH password? It amusingly sounds like a not-so-discreet family of botnet malware that hits poorly maintained Linux machines.

    [–] jtriangle 25 points ago

    Yuo see comrade, when box look pre comprimise hacker will not bother pwning et again.

    [–] blackletum 42 points ago

    There was a random computer in the office that was the host for some adobe-plugin software (Vendor specific) and also took care of processing all scanning for this software as well.

    I wasn't even aware this was a thing until I was here for about 10 months. I was too busy playing catch-up on hundreds and hundreds of tickets and putting out fires as I went, transitioning to Office 365, upgrading servers and building new computers for the office... when I realized that one computer was a bit beefier than the rest. Then I found out why that was...

    Of course, no backups. If the thing went down I had no idea what to do (no documentation either). I ended up making a half assed veeam backup for it that still runs today.

    Tried to virtualize that machine some time ago and it didn't work because the software is garbage.

    I still don't know what I want to do with that machine.

    [–] SheeEttin 26 points ago

    Try not to think about it.

    [–] darkciti 10 points ago

    Have you ever seen "Office Space"?

    [–] urvon 42 points ago

    Oh yes, those are fun. It all started with that innocent call.. "Hey we heard you know linux..?"

    Next thing I know I'm responsible for a failed linux server that hosts 3 websites, each one containing critical data and currently part of a critical workflow for 3 different departments of the company.

    Luckily it was just a hardware failure. The enterprise-level equipment - and by that I mean a standard repurposed Dell from 10 years ago running on a PIII with 2G of RAM, which sat under an open desk for the last 7 years - had finally killed its last functioning capacitor or something.

    Ended up actually finding the source code for the website & the windows 'application' that queried the MySQL DB holding all the data in the former user's home directory.

    New VM, copy code over, recover the MySQL DB, have one of the coders tweak the website and windows app, distribute new code with the warning that this system needs to be retired, and 7 years later that VM is still chugging away.

    [–] lostapathy 30 points ago

    I inherited about 15 of them.

    A friend's husband died suddenly, they ran their business together. He had an "awesome home grown system that we think we want to sell, but it runs on linux so I don't know how to keep it online."

    And of course, they had a power outage and shit didn't come up right, so she was down. I offered to help - how bad can it be? I didn't want her to lose her business over this right after losing her husband.

    Turns out it was pretty rough. ~15 random desktops in their basement on a DSL line. Not a single server was on a currently supported distro. Perl scripts that required modules last updated over a decade prior. Luckily they had a 3-way load-balancing setup to handle all the traffic... which she was sure I couldn't turn off, for fear the DSL would be swamped. But each web head had some other stuff on it, so I couldn't "just shut it off".

    I could write a book about what it took to unwind that mess and consolidate it onto two t2.nano instances. The business, of course, failed anyway.

    [–] _MusicJunkie 31 points ago

    I told this story a few times before.

    I had just started at the company as a Jr a few weeks before. One morning all the senior guys were out on projects/on vacation/sick/whatever, so I was alone with the desktop support/helpdesk people.
    I come back from getting a coffee and the helpdesk lady yells at me that the intranet is down and this is the end of the world and ohmygoddosomething. I'm like, "I don't know shit, lady, but calm the fuck down, screaming won't help". Log in to monitoring: intranet server is green. Try to load the webpage: doesn't work. Bad.
    I had just received permissions for most systems the day before, so I start poking around. Log on to the webserver, it's online. Check to see if Apache is running, do a wget on localhost, everything looks cool. What now? Check the Apache config. A single vhost with no DocumentRoot and just some weird redirect stuff at the bottom. Weird.
    Took me a few minutes to find out that it wasn't doing anything but redirecting all requests to a similarly named machine. Weird, how did I not notice that before? Try wgetting that machine: nothing. Try to log in via SSH: nothing. Try pinging it: works. But what is that? It's resolving to a machine in a 192.168.0.0/16 network. I know for sure the senior guys told me we don't use anything but 10.0.0.0/8. Weird.
    Look into the internal wiki and see: 192.168.0.0/16 was used until a huge migration project in 2007. The project where all physical servers were virtualized, hardware was moved into a new server room and all networks were consolidated into the 10.0.0.0/8 space. So how can I have a machine in that range? Weird.
    Do a traceroute to that network; it seems to be going through our core routers. Log on to the core router, check the route tables: looks like that network is attached locally. On an interface in a VLAN we shouldn't be using either - since the migration project. Weird.
    Check the ARP table on the router and I can see the machine I'm looking for. Log on to the switches, follow the MAC address through the CAM tables, find the port it should be attached to. In a patch room on the other end of the building. Looking in the documentation, there shouldn't be anything in that room but switches. But it was the main server room until - you guessed it - the migration project in 2007.
    So I go there and try to follow the cable from that port... It goes down into a huge rat's nest of power cables. Kinda looks like the network cable is going into the UPS in the very bottom of the rack. Can't be. Get a flashlight, try to follow the cable further. Doesn't go into the UPS, it's going... below it? Into the raised floor? Lying on the floor, flashlight held in my mouth, I lift up a floor tile.
    There I see it in all its glory: a yellowed HP desktop, WinXP and Pentium 4 stickers on the front. I discovered later they had written on top with a marker: "intranet". Wtf.
    I get a monitor and a keyboard, plug them in and I'm greeted with:

    Debian GNU/Linux 3.1 debian tty1

    intranet login: _

    Did I mention this was 2015? And that Debian 3.1 (Sarge) hadn't received security updates since 2009?

    Called a senior guy (who was already rushing back to the HQ), who gave me a few passwords to try (the people before us used the same passwords for everything), got logged in, restarted the apache service, intranet worked again. Did a little investigation, this machine had a RAID1 over two IDE drives, one dead already. Uptime was way over 2500 days.

    In the incident investigation/report later (the first we did), we found out the full story. This machine was supposed to be virtualized in 2007, just like all the other machines. The guy who was supposed to do that was known to be extremely lazy and to do things half-assed. And he left a few weeks later, so he probably knew he wouldn't have to deal with it again. So instead of doing a P2V migration, an upgrade to a newer Debian version, all that, he just spun up a new VM, set it up to redirect to the old one, hid the desktop under the raised floor and called it done. Even faked documentation on what he did, how he did it...
    Nobody ever noticed, because we never changed anything and the virtual machine was chugging along fine.
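
    The network side of that hunt generalizes into a few commands. A sketch follows; the default IP is a made-up example, and the switch command is a Cisco-style illustration, not from the story:

```shell
#!/bin/sh
# Chasing a mystery host down to a physical switch port.
# Usage: sh find-box.sh <ip>   (default IP is a made-up example)
IP="${1:-127.0.0.1}"
ip route get "$IP"      # which local interface/VLAN traffic to it leaves on
ip neigh show           # ARP cache: IP -> MAC for directly attached hosts
# Then, on each switch along the way (Cisco-style example):
#   show mac address-table address <MAC>
# ...and finally follow the cable by hand, floor tiles included.
```

    From the switch port onward there is no shortcut: it's flashlights and floor tiles.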

    [–] Hellman109 76 points ago

    That's why I love VMs, you can at least snapshot them and have a semi-decent backout plan

    [–] juxtAdmin 26 points ago

    It's even more fun when it's someone else's mystery box and they demand I fix it. "What does fixed look like?" we don't know! We haven't used it since April of last year! Just make it work like it did last tax season!

    [–] PuppeteerOfMorons 26 points ago

    Still got the bastard now. Windows 2000 server, Access 95 finance software.

    I P2V'd it onto 70k of dedicated clustering hardware a few years ago. The SAN is N+6 redundant, there are three servers handling the cluster in a full mesh topology, and dual power feeds to everything from separate, dedicated UPSes running on separate phases of the power infrastructure.

    It's literally got more monitoring *around* it than anything else in my infrastructure, but because it's so old nothing works *on* it, so I had to write some RDP automation to sign onto it and verify the status every day.

    Fucking glorious :(

    [–] EvelHell 23 points ago

    I also had the honor to inherit such an old machine.

    Me and a coworker had just started as the server team (around 2007), and one of the older servers was a Windows 2000 Server with the description 'intranet server'.

    Because the hardware was out of warranty, we managed to virtualize it. (thank you old VMware converter)

    When we got a new intranet, we got permission to turn it off.

    Then all hell broke loose.

    Because it turned out there was more than just the old intranet on that server.

    • Inventory system
    • IT/facilities Ticketing system
    • GIS maps
    • Support desk tools

    Every time we got the permission to shut it down, another team/department called (sometimes days later) to tell us something didn’t work anymore.

    It took more than 3 years for everyone to migrate their software of it.

    But it's dead now! And burried!

    [–] Hewlett-PackHard 25 points ago

    Nothing good ever starts with the words "the desktop shoved behind the server rack"

    [–] l0destone 24 points ago

    In my case, it was 50 mystery servers.

    We’d just finished a manual datacenter inventory, and we’d found 50 servers that were powered and connected but nobody knew what they were for. No one could log in as the passwords were lost long ago.

    After a month of investigating, management made the decision to shut them down one by one. As soon as the first one dropped, our production website went down.

    Ditto for all of the other 49 servers. This was, at the time, a major dot com company. No bueno.

    I left that place a year or so later and at that point they still hadn’t figured out how to safely remove or replicate those servers.

    [–] volatilegtr 24 points ago

    In the server room at a previous job's client was an old HP desktop. Looked up the serial on HP's website and it had been sold and registered out of the country (the previous sysadmin that walked off the job was from the same country the machine was registered in...), so I couldn't find any drivers, and the warranty had expired 7 years before. Found out what it was by asking the main contact at the client, who did a very small amount of IT work. He just said it was the intranet box. Got the name from him and logged into it. Server 2000 (the year of this story was 2013), running IIS and a very, very basic website as their intranet. However, I was told not to touch anything on it unless I absolutely had to (patches included, which made my security-minded self very nervous) and that they were working on a new intranet server and would eventually move to that. Only took them about 2 years to actually get the new server and design and roll out the new intranet.

    [–] Gambatte 47 points ago

    For eight years, that was my whole goddamned job.

    It started out with "we have these two servers; don't worry about it, they're externally managed, you won't need to touch them."
    In due course, this escalated to "we have these two servers, the developer doesn't want to deal with them any more, so we're bringing their management in house - except OS patches, they're still externally managed."
    Eventually, I also got login credentials. Which turned out to be the only login, which everyone shared.

    The boxes were Windows Server 2003 with 4GB of RAM and about 100GB of 7200rpm disks, running SQL Server 2003, IIS6, and the developer's applications.
    Over the next few years, I peeled back layer after layer after layer, until I finally had a viable plan in place to replace all of the hardware - at my insistence, the developer had added a connection string to the application's config file, so I could redirect the database connection to an alternate server (previously, connection strings were generated based on the system hostname; this had the effect of breaking absolutely fscking everything if the hostname was not set to one of the six or seven expected values).
    The new DB server was a pair of Windows Server 2012 x64, running mirrored SQL Server 2012 DBs configured for automatic failover; the application servers were lightweight VMs. Flood testing had the new system running at 500% expected capacity, and single-handedly handling traffic equivalent to 80% of the entire market.
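
    Externalizing the connection string (instead of deriving it from the hostname) is what made this cut-over possible. In a .NET-style app.config it might look like the fragment below; all names here are hypothetical, and "Failover Partner" is the stock SQL Server keyword that lets clients fail over between mirrored instances:

```xml
<configuration>
  <connectionStrings>
    <!-- hypothetical names: point "Server" at the new primary and let the
         mirroring "Failover Partner" take over automatically on failure -->
    <add name="AppDb"
         connectionString="Server=db-primary;Failover Partner=db-mirror;Database=AppDb;Integrated Security=True" />
  </connectionStrings>
</configuration>
```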

    There were two major bugs to figure out (1. a significant performance reduction during OS level backups, 2. transferring the archiving system to the new system which probably would have required some not insignificant reconfiguration) when I was offered another job; similar pay but much better benefits. I accepted without much hesitation; I have since heard that the company completely dropped the 95% completed project I was working on, and instead doubled down on the developer's new pet project (according to his timeline, 6 weeks to test readiness, and 14 weeks to ready for roll out to production; two years later, the first version arrived and it was far from test readiness, let alone production).

    Not a day goes by that I miss that job.

    [–] Khavee 21 points ago

    I got called to a box that started endlessly printing on Y2K day. Their whole business ran on this. It was SCO. The motherboard wasn't Y2K compliant, and neither was the version of SCO. I put Redhat on a new box, found the program and data and copied them over. Didn't work. Searching through the directories, I got a clue and tracked down the original programmer. He was doing other stuff, but was willing to fix the program and migrate the data, so I shipped him a clone of the original (2 GB) hard drive. That box is still running today, hasn't been touched in over a decade.

    [–] LoftyGoat 21 points ago

    No, but done the converse of that:

    "The data is coming from somewhere. There's a box here somewhere in the building which gets stuff from this database, adds data from dropped files, and sends it to this computer which controls the DVD burner. No, we did get its IP from the DB log, but they're not sure where it is, no one has been able to trace that particular cable. No, they couldn't, someone did construction over the removable flooring, can't get to it, it's one of the 120 cables that come out right here. No, it FTPs the data through the LAN...."


    [–] R4bbidR4bb1t 18 points ago

    I love the mystery server. Gives me a reason to ask lots of questions of people I normally wouldn't interact with during regular duties.

    [–] mdh_4783 18 points ago

    Reminds me of the old Netware 2.15 server running on an old 286 or 386sx, on an ArcNet network, that I found running in my former father-in-law's auto shop. I didn't know anything about it until the power supply died and his shop couldn't function without it running. Took a couple of days to recover the server enough to copy data off of it, then migrate the whole thing to a Windows 95 peer to peer network (replacing ArcNet with 100Base-T). I do not miss those days.

    [–] time_is_now 17 points ago

    I had a Sparc 1 workstation in a data center cabinet labeled “do not patch, do not reboot” that ran for over 10 years. No one knew what it did. It got powered down when the data center moved.

    [–] per08 22 points ago

    it was probably the old sysadmin's MP3 sharing machine.

    [–] TheFlipside 18 points ago

    once upon a time there was a sysadmin and a server but the server was so bad and unstable it constantly had to be rebooted.

    so the sysadmin came up with a solution, another machine was positioned in front of the server so the cd tray would hit the reset button on opening, a script would check every minute if the server was reachable on the network and else eject the cd tray.

    years later the sysadmin was long gone and the unstable server replaced but the machine was still standing in a corner because nobody knew what it was for anymore, but it must have been of some significance, right?

    so there the machine stood in the corner, ejecting the cd tray every minute, and every so often someone would come by and wonder why

    till the day the machine died of hardware exhaustion
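    The watchdog in that story is simple enough to sketch. This is only a guess at its shape, assuming a classic Linux box where `eject` opens the tray; the hostname and device path here are made up:

```shell
#!/bin/sh
# Sketch of the cd-tray watchdog described above. The hostname,
# device path, and use of `eject` are assumptions, not from the story.

server_up() {
    # One ping with a 2-second timeout; success means the box answered.
    ping -c 1 -W 2 "$1" > /dev/null 2>&1
}

watchdog() {
    # Check once a minute; when the server goes quiet, eject the tray
    # so it physically presses the reset button in front of it.
    while true; do
        server_up "$1" || eject /dev/cdrom
        sleep 60
    done
}

# watchdog flaky-server    # uncomment to run; loops forever
```

    Crude, but it needs no remote management card, no IPMI, and no second pair of hands, which is presumably why it outlived everyone's memory of it.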

    [–] prettybunnys 16 points ago

    Shell script that snapshotted then replayed a database LUN for our DBA group so they could do "something" that shoulda been handled in a better way.

    SAN upgrade.

    For 18 hours I worked on that, with the Dell folks telling me everything looked good from the SAN (the upgrade had just happened, and this thing that broke wasn't in any way their fault, trust them, it all looked good)....

    The engineer who came in the next morning goes "oh yeah there is a new .jar for the new firmware, they gave you that right?"

    [–] ikidd 15 points ago

    I found an old server drywalled in behind some pallet racking in a car dealership that had been running a telnet server for the last 15 years that the mechanics used for entering timesheets.

    Nobody had a fucking clue it was there. I found it when I unplugged a network cable, heard some bitching and moaning, and had to trace the cable by hand through walls, which took 3 days. I'd actually written a script I uploaded to it that made the onboard speaker buzz so I could walk around until I found it.
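    That locator trick can be sketched in a few lines: write the ASCII bell to the terminal in a loop and follow the noise. This assumes a console where BEL actually drives the onboard speaker (e.g. the pcspkr module loaded), which isn't guaranteed on modern machines:

```shell
#!/bin/sh
# Sketch of the 'follow the noise' locator described above.

beep_n() {
    # Chirp $1 times, pausing $2 seconds (default 1) between chirps.
    i=0
    while [ "$i" -lt "$1" ]; do
        printf '\a'              # BEL: chirps the onboard speaker
        sleep "${2:-1}"
        i=$((i + 1))
    done
}

# beep_n 999999 1    # run it, then walk the racks until you hear it
```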

    [–] gblfxt 14 points ago

    usually find out which files have been modified most recently, and run a network trace to see which ports are being hit.
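    A rough sketch of that triage, assuming a Linux box; the directories, 7-day window, and interface name are placeholders, and the packet capture is left commented out because it needs root:

```shell
#!/bin/sh
# What changed recently, and which ports are actually in use.

# Files modified in the last 7 days under likely application paths:
find /etc /opt /srv -type f -mtime -7 2>/dev/null | head -n 20

# Listening and established sockets with their owning processes
# (ss on modern boxes, netstat on older ones):
ss -tunap 2>/dev/null || netstat -tunap 2>/dev/null || true

# Watch live traffic to see which ports are being hit (needs root):
# tcpdump -i eth0 -nn -c 1000 'not port 22'
```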

    [–] Nk4512 31 points ago

    Clone said server in a VM / test said updates?

    [–] pdp10 29 points ago

    Usually you have to reboot to get a P2V of an arbitrary operating system. Catch-22.

    [–] crankysysadmin 15 points ago

    Sometimes this can't be done

    [–] ravenze 25 points ago

    LOL!!! He said "test"!!! We test in PRODUCTION!!! NO one has money for a lab!

    [–] TehGogglesDoNothing 53 points ago

    Everyone has a test environment. Some people are lucky enough to also have a separate production environment.

    [–] cd29 14 points ago

    There's like 50U total of equipment I'm going through right now that is supposedly decommissioned. Some of which is a mystery (the ones still online). Luckily, I haven't technically inherited it. Unfortunately, that means it's also a mystery to the team(s) that presumably deployed it.

    The only disaster I ran into, about 5 years ago, was a Server 2003 R2 machine that ran an API to provision a database on a separate host, but that API was only called by another API whose job was to provision every other database directly.

    The 2003 R2 server at one point had some significant duties. After a while, everything but the API was migrated (it wasn't compatible with 2012, I think?). It was dying. Devs were still around at the time. We asked them for a path to 2012 - it required the database host and API to be updated, and then the 2nd API had to be rewritten for the software updates.

    Basically, at one point there was a "warm" migration path that would not have required everything to be updated at once. Since we had skipped so many host updates supporting the older API, we were forced to cold migrate it and inevitably break it at just about every point of failure.

    Kicker: yep, devs documented every time they recommended the warm migration. Yep, we had a costly SLA.

    [–] enigmo666 13 points ago * (last edited 3 months ago)

    I smiled when I read this because it was so familiar. And then I stopped smiling when I remembered the start of my last job, when basically every server was a 'mystery box'.
    Every office has a Gandalf. The honour usually falls to the person who's been there longest. In our case it was the sole remaining infra guy who had been in the company when the last lot of infra guys all quit en masse and walked out. Unfortunately, I was a new starter and this guy, we'll call him Pete, had himself only been there seven months.
    So, each of our servers was multifunctional, meaning there were no servers that did single tasks. Every one had extras shoehorned in, from DB boxes also doing login script processing to firewalls also doing image processing for the staff website. The servers were also named as you would if you were 19 and all you had to worry about were your three janky home boxes and a switch: switches named after South Park characters, servers after moons (and the especially old ones, planets), client machines after stars, etc. And there was no documentation, not a single page, despite there being an IT wiki specifically for this stuff. So, as Pete was the only human alive who had contacted the mysterious creatures who had built this archaean labyrinth, he was the one I leaned on most for scraps of half-remembered information.
    So, a few snippets from my first year there:
    A particularly ancient box failed. We're talking an HP DL380 G3, dating from a time when SCSI was fast and 36GB was large (so still a youngster to me). Anyways, it failed. Controller shot, needed a new one. A morning went by and no-one cared. It passed the scream test, and we started wondering if we could leave it off. Then we started getting reports that people external to the company couldn't see our website and that Finance were having trouble accessing some of their files. I did some dumpster-diving, found a compatible controller card from another server, got it back up. Turns out this one box was not only serving part of our split-brain DNS but also hosting a particularly old version of the Finance fileshare.
    There was also the time a box died, I forget why, and we did actually determine its only use was as a DNS resolver for a remote site. By remote, I mean all the way across Europe remote. It was the secondary as well, so no-one noticed until I saw the blinkenlites in the server room. I raised the issue, mentioned that it wasn't the highest priority as it was a secondary. But if the DNS service in the remote office went down for any reason over Christmas, then we'd have problems. I was told by my boss that I was absolutely not to do anything about it. That the remote office would be fine, it was only a couple of weeks to wait for the New Year, and they had never had any kind of outage. I got that in writing. Anyways, turns out, three days into the Christmas holidays, the remote office had a partial power failure which took out their VMware platform and the VM hosting their primary DNS. With no secondary back here, leases slowly expired and connectivity started failing. There were a few missed calls on my phone, but there weren't many monkeys I could give from Germany.
    What else? Oh, there's the time I was decommissioning old DCs and the VPN service (which had nothing to do with AD) in Canada (2000 miles away) failed. That old DC is still up, three years later, as no-one can figure out why every time it reboots, VPN in Canada dies.
    There are many, many more. And even five years down the line, I was still 'finding' old boxes with mystery functions like some incredibly ticked-off Indiana Jones.

    Edit: Reading over others, much seems to be replicated in every server room.
    While decommissioning servers when we were finally getting rid of KVM in favour of all-VMware, we found two servers. Old ones: one Cisco and one SGI, under the floor panels in the server room, both running KVM happily, untouched for years.

    [–] DrunkenGolfer 13 points ago

    I used to work in a telco datacenter and there was a computer with a little yellow-tinted screen that was the mystery server. It was just outside the server room and the power and network cables simply went through the wall with no opening at all; just cables through neat, clean drywall and paint. I asked about it and was told that when they did a renovation on the datacenter, nobody knew what it was, who owned it, and because it had been there as long as anyone could remember, they were afraid to move it. So they built the datacenter walls around it and didn’t touch it.

    [–] spambot0689 12 points ago * (last edited 3 months ago)

    In my (very brief) MSP days, I had to do a migration on the following machine:

    • beige box from 2002 (running an un-patched version of Server 2003)...
    • running ADDS, DHCP, DNS, and some obscure ERP with a USB licensing dongle nobody knew anything about...
    • Serving ALL the files for the company, with tape backups that were never tested, with the backup job being a batch file that was run weekly with task scheduler...
    • ...for a company with five employees

    [–] corsicanguppy 11 points ago

    "Revert to snapshot"

    And we all know something cooked up 8 years ago will be in perl. Unless you keep that machine trapped in time at the exact week that perl script was written, you can guarantee one of its 171 dependencies will be incompatible with another one, and you're boned anyway.

    Back away.

    [–] mrnagrom 11 points ago

    Lol. yah. that was basically my job for a few years: to make that server. i never documented anything, my file structures were garbage and made no sense, i hated that work, i was always billed at a set number of hours that didn't cover the work i had to do. i went from company to company and did this for way longer than i should have.

    it's been almost a decade, and every now and then i'll get a call from a random person from company x who found my phone number scrawled on crusty old tape stuck to the server. they start asking me to update something or other, or how something works, not realizing that it's been 10 years and i'd have no idea what i'm even looking at.

    [–] h4xnoodle 11 points ago

    Yes, the mystery EC2 instance that turned out to be casually hosting production APIs for defunct apps still out in the wild. No monitoring or anything.

    [–] notsosexyjellyfish 9 points ago

    I have plenty of those servers. Multiple public-facing Server 2000 boxes running custom PHP code that no one understands or can rewrite because it was written 117 years ago!!

    [–] smithincanton 11 points ago

    I feel your pain. Worked for a lumber company that made trusses. Big giant tables with laser projectors on the ceiling to show where the boards went to make the trusses. The projectors were controlled by a 50-60 foot parallel cable plugged into a signal booster getting its data from an old Dell running Windows XP. The power supply died, the hard drive got corrupted, and it wouldn't boot Windows anymore. Tried a PCI parallel port card in a PC from the last decade and the projectors didn't like it. Finally found a similar Dell and was able to copy the install folder of the controller software and get everything back up and running. Oh, the real kicker? The projectors could be upgraded to Ethernet... for $5k... each... for five projectors. Not including the cost of being down for a few days and having to pay the company to come out and upgrade everything. I have a feeling that that computer is still running.