
Migrate/recycle/sell servers #98

Open
ck2qsuZT opened this issue Jan 17, 2017 · 14 comments

@ck2qsuZT

We currently have:

  • Two old Intel Atom servers in use (web server)
  • One older Intel Xeon server in use (file server)
  • One relatively new and very powerful Intel Xeon motherboard/CPU in storage.

I'd suggest putting the new Xeon motherboard/CPU into a new server, migrating the Atoms' services onto one of the Xeons, and then selling/recycling the two Atom boards.

The question is, which Xeon should be the file server and which one should be the web server?

We should also move our web/mail server to HE, so that our mailing list and the information from our website stay available even when Cloyne's internet is down.

Another interesting option might be to sell one of the Xeon servers in favor of an OpenPOWER server.

The POWER CPU architecture is of comparable computational power to x86/amd64 (from here on simply referred to as x86) while being both cheaper and more open. It also has better virtualization support and the potential to challenge Intel's current server-architecture monopoly, especially now that IBM recently made the CPU architecture open source, so OpenPOWER and POWER8 are currently interchangeable. I would personally rather support OpenPOWER than x86, but it's probably better to use the hardware we have than to buy new things. Also, POWER doesn't have as good software support as x86, since x86 makes up about 95% of the server market, so Salt wouldn't work unless we virtualize x86 on top of it, which adds some complications =p.

Some information on OpenPOWER:

http://www-03.ibm.com/systems/power/hardware/
https://www.crowdsupply.com/raptor-computing-systems/talos-secure-workstation (didn't receive funding, but the information on the CPU architecture is still relevant/interesting)

@ck2qsuZT
Author

Probably much better that the new Xeon goes into the colocation web server, since our file server is good enough and the server at our colocation is a bit more important.

@mitar
Member

mitar commented Jan 27, 2017

I do not think we should really have a server at colocation. :-) It is much too hard to debug that.

@ck2qsuZT
Author

We can test it and see how it goes, but I also didn't realize that it's a 2-hour subway ride away, so maybe not, unless we can figure out a good out-of-band KVM...

@ck2qsuZT
Author

Or maybe I can just host one of my personal servers there and make an LXC container or virtual machine for Cloyne. Probably not the best long-run solution, though, since it's potentially only good for as long as I live in Cloyne, and we would definitely need to upgrade to a higher power tier at our colocation cabinet.

@mitar
Member

mitar commented Jan 27, 2017

I think the file server should stay in the mesh network, because there is no reason to consume the Internet connection with those transfers.

So the only reason we would want anything at colocation is so that the website and mailing lists stay accessible when the Cloyne uplink fails, yes? I think this is much better fixed by creating a BSC mesh network: when the Cloyne uplink fails, traffic is rerouted through another house. That covers both Internet for Cloyne and the website and mailing lists.

This is why I do not think we should put anything at colocation.

But I do agree that having a good server in Cloyne (read: one capable of running nodewatcher and other CPU-heavier services) would be a good thing. But let's maybe first install nodewatcher on server3 and then decide. Because everything will be configured through Salt and Docker, it will be very easy to move services around: just change the configuration and run Salt, and that is it.
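To illustrate the "just change configuration and run Salt" idea, here is a minimal sketch of how a Docker-managed service could be retargeted between servers. The state and file names (`nodewatcher/init.sls`, the image tag, the minion IDs) are assumptions for illustration, not our actual configuration:

```yaml
# nodewatcher/init.sls -- the service is defined once as a Salt state
# (docker_container.running is Salt's built-in Docker container state).
nodewatcher-container:
  docker_container.running:
    - name: nodewatcher
    - image: wlanslovenija/nodewatcher:latest   # placeholder image tag
    - restart_policy: unless-stopped

# top.sls -- which server runs it is decided only here, so "moving" the
# service means changing one minion ID and running state.apply.
base:
  'server3':        # change to 'server2' (or the new Xeon) to move it
    - nodewatcher
```

Under this layout, migrating a service is one edit to `top.sls` followed by `salt '*' state.apply`; the data volumes would still need to be copied separately.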

@ck2qsuZT
Author

ck2qsuZT commented Jan 28, 2017

Power outages are also an issue, but maybe we can get a decent UPS. If so, we can just reuse server3's case, populate it with the new Xeon, and sell everything else (server1, server2, and the CPU/motherboard from server3). ECC RAM might be nice but probably isn't needed at all. Also, it might not hurt to have a mirror in Rochdale, both so we use the northside/southside connection (whatever it may be) as little as possible and for redundancy.

Personally, I'm almost leaning towards selling everything, at which point we could get two ASUS KGPE-D16 boards. These have better price/performance ratios but would consume more power. They also have libre firmware via Librecore =)

We could get two boards for about $600, including 16 GB of RAM and 4 CPUs total (each with 8 cores).
Disclaimer: I am personally supporting the Librecore project financially.

@mitar
Member

mitar commented Feb 2, 2017

Power outages are also an issue

This happened once or twice a semester for me. I would not complicate things with that either. We just have to make sure that everything (all services) gets restored once the servers come back on.
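Since the containers are Docker-managed, the "everything gets restored after power returns" part can largely be handled by restart policies, assuming the Docker daemon itself is enabled at boot (e.g. `systemctl enable docker`). A minimal docker-compose-style sketch with a placeholder service:

```yaml
# Hypothetical compose fragment, not our actual config.
services:
  web:
    image: nginx:stable          # placeholder image
    restart: unless-stopped      # comes back after reboot/power loss,
                                 # but stays down if stopped manually
```

The same effect can be applied to an already-running container with `docker update --restart unless-stopped <container>`.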

I do not think we will get a lot of money for any of this hardware.

But, talk to the house and let's see what they think.

I would suggest that maybe first you get them to realize the importance of all this hardware. :-) So get them to use it for what we already have.

@ck2qsuZT
Author

ck2qsuZT commented Mar 6, 2017

I'm wondering if we really need this at this point. I feel like server2 could be moved into server3's case and turned into a file server, and server3 could take the place of server2; then processing power wouldn't be as much of an issue, since server3 has a half-decent CPU.

@ck2qsuZT
Author

ck2qsuZT commented Mar 6, 2017

Also, an update: the house voted to keep the makerspace computer as a makerspace computer, and I said I would come back later with more information on whether we need a new server and, if so, which one to get.

@mitar
Member

mitar commented Mar 6, 2017

I would just leave things as they are. If anything, just migrate Docker containers from server2 to server3 as needed.
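A rough sketch of what such a migration could look like, assuming SSH access between the servers and a named Docker volume; the container name `app`, volume name `appdata`, and image name are placeholders, not our real services:

```
# On server2: stop the container and ship its data volume to server3
docker stop app
docker run --rm -v appdata:/data -w /data alpine tar cz . \
  | ssh server3 'docker volume create appdata > /dev/null &&
                 docker run --rm -i -v appdata:/data -w /data alpine tar xz'

# On server3: start the container against the copied volume
ssh server3 docker run -d --name app -v appdata:/data \
  --restart unless-stopped myimage
```

If the services end up Salt-managed as planned, this tar-over-SSH step would only be needed for the data; the container definitions themselves would move by retargeting the Salt configuration.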

@ck2qsuZT
Author

ck2qsuZT commented Mar 6, 2017

So let's test nodewatcher on server3 before committing to a new server?

@mitar
Member

mitar commented Mar 6, 2017

I think that's the plan.

@ck2qsuZT
Author

ck2qsuZT commented Mar 7, 2017

Okay, just making sure since I've somehow managed to mix multiple issues together and it's getting a little confusing.

@mitar
Member

mitar commented May 18, 2017

So I think the current server1, server2, and server3 work well enough.

What is the plan with the big server enclosure in the network room? To put makerspace hardware in it?
