Ever heard of cabinets with cooling?
I don't know what planet you are on, but the Internet connection and uplink in a house are not fast enough to handle what they actually do.
Ever heard of cabinets with cooling?
Yes it is: it comes down to having shit hosting for their servers in the first place and not deploying backup/secondary systems as soon as the issue popped up.
I have a server, but servers with a "cabinet" cost a fortune (which I also have).
After apoc and I committed our weekend to the auth servers (literally committed: I haven't slept in 40+ hours and have been sitting in front of the computer trying to fix auth), it seems we have nailed down all the issues. We have increased the number of auth servers from 2 to 4, and they have all been working stably for the last 5 hours.
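For those asking how we keep an eye on the pool: conceptually it is nothing fancier than polling each auth server and checking that it still accepts connections. A rough sketch of that idea in Python (the hostnames and the port below are placeholders, not our actual boxes):

import socket

# Hypothetical auth-server endpoints -- placeholders, not the real hosts.
AUTH_SERVERS = [
    ("auth1.example.com", 3724),
    ("auth2.example.com", 3724),
    ("auth3.example.com", 3724),
    ("auth4.example.com", 3724),
]

def is_up(host, port, timeout=3):
    """Return True if a TCP connection to the server succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in AUTH_SERVERS:
    print(f"{host}:{port} is {'up' if is_up(host, port) else 'DOWN'}")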
We discovered that, after the MoP launch and the increase in active sessions, we started hitting MySQL limits that weren't even documented.
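For the curious, the first thing worth checking on the MySQL side is the connection ceiling versus what is actually in use. A sketch along these lines, assuming the PyMySQL driver and placeholder credentials:

import pymysql

# Placeholder credentials -- adjust for your own server.
conn = pymysql.connect(host="127.0.0.1", user="monitor", password="secret")
try:
    with conn.cursor() as cur:
        # Configured ceiling vs. sessions currently connected.
        cur.execute("SHOW VARIABLES LIKE 'max_connections'")
        _, max_connections = cur.fetchone()
        cur.execute("SHOW STATUS LIKE 'Threads_connected'")
        _, threads_connected = cur.fetchone()
    ratio = int(threads_connected) / int(max_connections)
    print(f"{threads_connected}/{max_connections} connections in use ({ratio:.0%})")
finally:
    conn.close()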
Since our servers are working fine now, in the next two days we are planning to set up the load balancer and our new infrastructure, which is highly scalable and will hopefully last for a good long time.
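To give a rough idea of what the load balancer buys us: each new session gets handed to the next backend in a rotation, so adding capacity is just a matter of extending the pool. A toy round-robin sketch with made-up addresses (the real setup will use dedicated load-balancing software, not a Python loop):

import itertools

# Hypothetical pool of auth backends.
BACKENDS = ["10.0.0.11:3724", "10.0.0.12:3724",
            "10.0.0.13:3724", "10.0.0.14:3724"]

# Round-robin: each new session goes to the next backend in the cycle,
# so load spreads evenly and scaling out just means extending the list.
rotation = itertools.cycle(BACKENDS)

def pick_backend():
    return next(rotation)

for session in range(6):
    print(f"session {session} -> {pick_backend()}")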
There's a reason servers aren't operated from residences. You should NEVER operate a mission-critical server without redundant networking, power, and cooling. This is why data centers have multiple providers, load-balancing routers, UPSes, generators, HVAC, etc. It's entirely possible to have near-100% uptime and stable operation, as many web hosting providers do (99.99% uptime in practice), but without proper configuration those figures are simply not attainable. The difference here is the lack of staff: we cannot expect 100% uptime from a few people managing servers that take this many requests.
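To put those uptime figures in perspective, here is roughly how much downtime per year each one allows:

MINUTES_PER_YEAR = 365 * 24 * 60

for uptime in (99.9, 99.99, 99.999):
    downtime = MINUTES_PER_YEAR * (1 - uptime / 100)
    print(f"{uptime}% uptime allows about {downtime:.0f} minutes of downtime per year")

At 99.99% that works out to roughly 53 minutes a year, which is what well-staffed, well-provisioned hosts aim for.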
It's easiest to think of it like this: you buy a new computer, and it's fast for a while, but you're negligent about maintaining it: you've installed malicious programs, downloaded a bunch of media, and when you use it you have 50 programs running. At first those 50 programs run perfectly fine, but as the crap builds up, maybe you can only run 40 of them at once. But hold on, if you had kept your computer maintained, this wouldn't have happened! A server can't be maintained like a regular computer: its data is dynamic and constantly grows with user demand. Maybe some of that data can be cleaned up, but eventually the servers reach their capacity. That's when more servers are added to the "cluster", so to speak.
Let's assume the Buddy team uses MySQL for the DB backend and Apache to serve HTTP requests. It's safe to say it would be difficult to predict everything that affects the load averages of those daemons. Just like WoW, this software isn't always problem-free. Herein lies the main issue: server administrators cannot be held solely accountable for issues that may or may not occur. Of course, building on my earlier analogy, proper maintenance helps, but it will never remedy these issues altogether. Maybe on an average day this website gets ~1 million views. Then someone on a popular news site posts a link to HB, and for a few hours traffic spikes to the equivalent of ~24 million views a day. Is it worth spending all that money on extra permanent servers to accommodate a short burst of traffic? NO!
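The gap between those two (hypothetical) traffic levels is easier to appreciate as requests per second, keeping in mind that real traffic peaks well above the daily average:

SECONDS_PER_DAY = 24 * 60 * 60

for label, views_per_day in (("normal day", 1_000_000), ("burst day", 24_000_000)):
    rps = views_per_day / SECONDS_PER_DAY
    print(f"{label}: ~{rps:.0f} requests/second on average")

That's roughly 12 versus 278 requests per second on average, a 24x jump that lasts only a few hours. Exactly the kind of spike it makes no sense to buy permanent hardware for.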
The solution here is flexibility, but more importantly redundancy. Cloud servers could be added dynamically and paid for by the hour in situations like this. Low-end, high-storage servers could be deployed to back up databases and other data. With preexisting infrastructure, though, there's no telling how much work it would take to move to something more stable and flexible. This is where the statement "Only time will tell" applies best. Two people doing ten people's jobs takes time.
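The "paid for by the hour" point is really just a simple control loop: measure utilization, add a cloud server above one threshold, drop one below another. A toy sketch with made-up thresholds and no real provider API:

# Toy autoscaling rule -- thresholds and load numbers are illustrative only.
SCALE_UP_LOAD = 0.75    # add a server above 75% average utilization
SCALE_DOWN_LOAD = 0.25  # remove one below 25%, never going under the minimum

def desired_servers(utilization, servers, min_servers=2):
    if utilization > SCALE_UP_LOAD:
        return servers + 1
    if utilization < SCALE_DOWN_LOAD and servers > min_servers:
        return servers - 1
    return servers

# A traffic spike pushes utilization up, then it falls off again.
servers = 2
for load in (0.40, 0.80, 0.90, 0.60, 0.20, 0.15):
    servers = desired_servers(load, servers)
    print(f"utilization {load:.0%} -> run {servers} server(s)")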
The gates for spammers are open.
SEE HERE HOW TO SOLVE THE AUTHENTICATION PROBLEM:
http://www.thebuddyforum.com/buddy-...official-auth-server-issues-23-10-2012-a.html