So, I decided I was going to make a cluster. Lo and behold, I got three Core 2 Duo systems and an Ethernet switch, and it's all networked together, but I can't seem to find any useful documentation on how to use the cluster in parallel, i.e. to distribute the load of one command across several systems. Here's what my setup looks like:
1. All have Linux (two have Mint, two different versions, and one has Lubuntu).
2. All are wired together over the LAN through an Ethernet switch.
3. I can access the other two from the main system through SSH.
4. They all have the same username and password.
5. Same CPU architecture on all.
6. Two different amounts of RAM: two have 3 GB and one has a measly 1 GB.
What I want to be able to do is run a command like, say, (parallel command here) ping 92.876.804.123,
or something totally different, like rendering in Blender. Either way, I have yet to find out how to do this. I found stuff on Google, but it was all at least three years old or it didn't make any sense at all... so if anyone could help me, that'd be great!
Also, if anyone knows how to use Blender in the terminal... hahaha
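For reference, the simplest form of what's being asked here is just fanning one command out to every node over SSH; tools like GNU parallel (with `--sshlogin`) or pdsh polish this up, but the bare idea is a loop. This is only a sketch: the node addresses below are made-up placeholders, and `RUN` is set to `echo ssh` so the script only prints what it would do. Change it to plain `ssh` (with key-based auth set up so no password prompt blocks the loop) to run it for real.

```shell
#!/bin/sh
# Fan one command out to every node over SSH (sketch).
# NODES: placeholder addresses -- substitute your machines'.
# RUN: "echo ssh" only prints the commands; change to "ssh" to execute.
NODES="192.168.1.11 192.168.1.12"
RUN="echo ssh"

for node in $NODES; do
    $RUN "$node" uptime
done
```

Note that this runs an independent copy of the command on each node; splitting a *single* job across boxes needs software written for it (MPI programs, or an application with its own network-render support). On the Blender question: `blender -b file.blend -f 1` renders frame 1 headless in the terminal, so one crude distribution scheme is handing each node a different frame range.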
Clustering is a subject WAY too deep to properly bring you up to speed on in a simple forum reply. I strongly recommend, for what you are wanting to do, that you do some googling and reading on:
#1. Linux iSCSI target and initiator. For any cluster you will need common, shared storage; unless you are fat with money, iSCSI is a readily doable method for most of us.
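To give a flavor of the initiator side, here is a sketch using open-iscsi's `iscsiadm`. The target IP and the IQN are made-up placeholders (your NAS/filer will report its own); each command is prefixed with `echo` so the sketch only prints what it would run. Drop the `echo` on a machine that actually has a target to talk to.

```shell
#!/bin/sh
# Sketch: attaching an iSCSI LUN with open-iscsi (package: open-iscsi).
# TARGET_IP and the IQN are placeholders -- substitute your target's values.
# The leading "echo" makes this a dry run; remove it to execute.
TARGET_IP=192.168.2.10

# 1. Discover what the target exports:
echo iscsiadm -m discovery -t sendtargets -p "$TARGET_IP"

# 2. Log in to a discovered target (the IQN comes from step 1's output):
echo iscsiadm -m node -T iqn.2006-01.com.example:storage.lun1 -p "$TARGET_IP" --login

# 3. After login, the LUN appears as an ordinary block device (e.g. /dev/sdb)
#    that you can partition, format, and mount like a local disk.
```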
#2. Linux HPC Cluster. I would give you pointers on this, but you are going HPC and I am working with HA clusters. Both are clusters, but they do different things: I am aiming for 100% uptime of my resources, you are aiming for processing as fast as Hades. They are built differently... Years ago I put up a 16-node RHEL/Beowulf cluster for a project at work; not sure if that configuration is still supported, though...
Typically speaking, on a cluster / cluster node you need:
#1. Reasonably similar hardware and operating system loads. 100% identical isn't hugely important, but variations in hardware can cause problems.
#2. Four network interfaces, each on its own network. One public, to connect to your LAN; this can be 10/100, no problem. Two for storage/SAN; these should each be on their own switch, with the switches uplinked to each other, and all gigabit hardware here, because you MUST have fast access to your storage. In mine I used channel bonding to get 2 Gbps across the networks as well as automatic failover should a path die. The last interface isn't 100% required, but a heartbeat network can also go over the SAN or the LAN; I like to have mine on its own network.
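For the channel-bonding piece, a Debian/Ubuntu-style `/etc/network/interfaces` fragment looks roughly like the following (needs the `ifenslave` package; the address and interface names are placeholders for your own setup):

```
# /etc/network/interfaces fragment -- bond two NICs into one interface.
auto bond0
iface bond0 inet static
    address 192.168.2.11
    netmask 255.255.255.0
    bond-slaves eth1 eth2
    bond-mode balance-rr    # round-robin: aggregates throughput of both links
    bond-miimon 100         # check link state every 100 ms for failover
```

`balance-rr` is what gives the roughly 2 Gbps aggregate; if you only care about failover and not throughput, `active-backup` mode is the simpler choice.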
Also, there are quite a few relatively inexpensive NAS boxes that can do iSCSI, and they walk you through creating the LUNs; getting the initiator installed and running can be done easily, as there are many tutorials, like dbhosttexas mentioned. I'm currently using a 4-disk Synology device; not clustering, but still more reliable than simple drive mapping, IMHO.
Yeah, you can do a cheap NAS box. I opted for a filer head box with an external RAID array and a bunch of disks configured as RAID 5, for capacity purposes. My cluster houses VMs and stores videos for streaming within my network, so my demands were somewhat high; a suitable NAS box just wasn't to be found...
IF you opt to go with OpenFiler to build a filer head / NAS controller, note that the latest build REQUIRES a 64-bit CPU and a minimum of 4 GB of RAM...