Moving from Windows to Ubuntu - Have some questions

Hello,

My setup is currently split across two computers:
10x 18 TB in PC1
7x 18 TB in PC2

I'm GPU farming on both PCs as well.

Both computers are running Windows today. My plan is to have one dedicated Chia PC running Ubuntu, and the other dedicated to GPU farming on Windows.

So my questions:

  1. Will Ubuntu autodetect all the hard drives if I move all 17 of them over to one PC? If not, is there an “easy” fix?
  2. What should my PSU be like for starting up 17 disks at the same time?
  3. Any other tips and tricks? I have little to no experience with Ubuntu and Linux, but I love to learn new things :).
  1. Yes, but you have to “mount” them, i.e. assign each drive a path where its files will appear, and set Ubuntu to do this automatically at startup (see the sketch below this list).

  2. Any decent ATX PSU will do, as long as it has 20 A on the 5 V rail. Having a PSU with 25 A is better if you ever want to expand a bit.
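For what it's worth, here is a minimal sketch of what that mounting looks like on Ubuntu. The device name /dev/sdb1 and mount point /mnt/plots01 are just example placeholders; check your own with lsblk first:

```bash
# list drives and their filesystems; note each drive's UUID
lsblk -f

# create a mount point and test-mount one drive
sudo mkdir -p /mnt/plots01
sudo mount /dev/sdb1 /mnt/plots01

# to mount it automatically at every boot, add a line like this
# to /etc/fstab (use your drive's real UUID from lsblk/blkid;
# use "ext4" instead of "ntfs-3g" once the drive is converted):
#   UUID=1234-ABCD  /mnt/plots01  ntfs-3g  defaults,nofail  0  0
```

The nofail option is worth having with this many drives: it stops Ubuntu from hanging at boot if one of the drives is missing or dead.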


How much does an HDD draw to spin up, 25 W ish?

So I would need a 1000-1200 W PSU if I'm looking to get a 36-bay ATX case?

Edit:

And I'm able to use all the plots I've already created in Ubuntu, right? :)

Are you asking if you will be able to take the plots that you already created on your Windows box and use them on your Ubuntu box?

Presumably, your Windows box formatted your plotted drives as NTFS.

Ubuntu will work with NTFS-formatted drives. However, it will be slow compared to a native Linux file system such as ext4 (the NTFS driver Ubuntu has traditionally used, ntfs-3g, runs in user space via FUSE, while ext4 runs in kernel space).

I once used MX Linux to plot (via madmax) to an NTFS-formatted SSD temp drive, and the performance was significantly worse. I then created an ext4 partition on that same SSD, and the performance was back to normal.

So it might be advisable to convert your NTFS-formatted drives to ext4 (or Btrfs).

Is that necessary, if used strictly for farming / GPU farming? I do not know.
Just mentioning the above, because it might be an issue that you should know about.

Can you convert from NTFS to ext4 without losing your plots?
Someone else will have to chime in. But I doubt that you can do such a filesystem conversion without losing the data on the drive. I am pretty sure that you would have to format each drive as ext4, losing the files on that formerly NTFS drive.

You will probably need a spare drive formatted as ext4, and then copy all of the plots from one NTFS drive to that ext4 drive.

Next, format that same NTFS drive as ext4 (which empties it), copy the plots from yet another NTFS drive onto it, and repeat.

The above will take a fair amount of time, and requires you to have at least one spare 18 TB drive. You might have to sacrifice the plots on one of your 18 TB drives to create that spare ext4-formatted drive.
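If it helps, here is a rough sketch of one round of that shuffle (device names are examples only; double-check with lsblk before formatting anything, because mkfs wipes the drive):

```bash
# mount the full NTFS drive read-only, just to be safe
sudo mkdir -p /mnt/src /mnt/dst
sudo mount -o ro /dev/sdb1 /mnt/src

# format the spare drive as ext4 (THIS ERASES IT);
# -m 0 reclaims the default 5% root reservation, handy for plot drives
sudo mkfs.ext4 -m 0 /dev/sdc1
sudo mount /dev/sdc1 /mnt/dst

# copy the plots over; rsync shows progress and can be resumed
rsync -a --progress /mnt/src/ /mnt/dst/

# after verifying the farm sees the plots on the ext4 drive,
# reformat the old NTFS drive as ext4; it becomes the next spare
```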


Not really. I had a 650 W PSU (Corsair RM650x) powering 22 drives and it didn't even break a sweat. Spin-up can draw something like an extra 20 W per drive, but only for a few seconds, and a good PSU can handle a brief peak like that. The most important thing for a PSU when connecting a bunch of drives is the 5 V rating; even a lot of 1000 W+ PSUs only have 20 A on it.
Also, if you use SAS HBAs and/or expanders, you can sometimes set the drives to spin up in sequence instead of all together.

In any case, you can check the drives' spec sheets or manuals to see all the power ratings for 12 V and 5 V.
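To put some ballpark numbers on that (these are typical figures for 3.5" 18 TB drives, not from any specific spec sheet, so check your own):

```
5 V rail:  ~0.7 A per drive running   -> 17 drives ~ 12 A  (fits a 20 A rail)
12 V rail: ~2 A per drive at spin-up  -> 17 drives ~ 34 A  (~400 W, for a few seconds)
```

That is why a quality 650 W unit can run 20+ drives: the total wattage is barely touched, and it's the 5 V rail capacity that sets the real limit.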


Thanks a lot for the answer :). I guess I have to start the conversion process ;p

At the moment I have all my drives in a Fractal Design Define 7 XL case, but my goal is to move them over to an Inter-Tech rack case that can hold 36 drives.

Any tips on a good HBA controller that lets me connect via SAS to the hot-swappable disks? (Is it an HBA controller I should buy, or something else?)

How much for the chassis?

OK, I'll try to stick with the Corsair HX series. I guess that should be OK?

“The most important thing for a PSU when connecting a bunch of drives is the 5 V rating; even a lot of 1000 W+ PSUs only have 20 A on it”

Don't really understand the 5 V and 20 A stuff ;p

This is what comes up in the Corsair HX1200 spec sheet (Utgangsspenning = output voltage):
[image: Corsair HX1200 output voltage specification table]

My “endgame” plan is to have 1 PiB, so 2x Inter-Tech 4U-4736. So I guess it's OK to just buy the 1200 W PSU right away? (One PSU for each case.)

I know there are other server JBODs that are much better and cheaper, but they are too complicated for me tbh ;p, and they make too much noise as well. In the Inter-Tech case I can put in all Noctua fans, which will keep the noise level down :).

$950 ish for the chassis. I know that's a lot, but I don't like the server ones :).

You're better off with their 4F28 case:

https://www.alternate.de/Inter-Tech/4F28-MINING-RACK-Server-Gehäuse/html/product/1778205

More bang for the buck :)


You need to be careful about that. Those JBODs with 4x3 drives in 2U (in your case, 4x6 in 4U) were originally designed for 2.5" and 3.5" SAS drives that were much thinner than any current 18 TB drive. That means 4 drives next to each other in their caddies will have little if any gap between them (it looks like the upper part of the caddy's side wall may leave some free space). Also, 3 drives in 2U (or 6 in 4U), caddies included, take up almost the full height of the enclosure.

What this means is that the fans you need are high static-pressure fans that can pull a lot of volume through small cross sections. That enclosure has such fans, but there is no amperage marking on them (in my case, a Dell SC200, those fans draw ~25 W max). When you compare such fans to Noctuas, you will see that their CFM is ~5x higher than the Noctuas', and on top of that most “quiet” fans are not high static-pressure designs, making it even worse. This is where the confusion between noise and air volume / static pressure kicks in.

Another thing is that those enclosures are mostly deployed in NOCs, where ambient temps are kept at 18 °C or less.

In my case, those SC200s originally had 2 PSUs, each with 2x 80 mm fans stacked one behind the other (to increase static pressure, not discharge area). So, as you said, it was really noisy. I dropped one fan and am controlling fan speeds based on the HD with the highest temperature (usually the ones in the middle). When the room cools down the noise is bearable, but not something to have around while holding a conversation. However, the enclosure is highly sensitive to ambient temps: a change of just a couple of degrees C in ambient causes the fan speed to jump by much more than 100% (from 6% of max up to 35% or so, so far). At around 20%, it becomes a nuisance if you are in that room.

The obvious problem for me would be the fan speed in that enclosure, especially what controls it. If there is just a thermistor on the backplane, it will give you roughly an average temp, not the max temp across your set. So you may well end up baking some of those drives if you switch to Noctuas.
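For what it's worth, if you end up rolling your own fan control, here is a toy sketch of the idea: key the PWM off the hottest drive, not an average. The hwmon path, PWM range, and drive list below are examples and will differ on your board, and smartmontools must be installed:

```bash
#!/bin/bash
# find the hottest drive in the set
hottest=0
for d in /dev/sd{a..q}; do
  t=$(sudo smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10}')
  [[ "$t" =~ ^[0-9]+$ ]] && (( t > hottest )) && hottest=$t
done

# map 30-50 °C linearly onto PWM 60-255 and clamp
pwm=$(( 60 + (hottest - 30) * 195 / 20 ))
(( pwm > 255 )) && pwm=255
(( pwm < 60 ))  && pwm=60

# write to the fan's PWM control (example path; pwm1_enable has to be
# set to manual mode first, typically by writing 1 to it)
echo "$pwm" | sudo tee /sys/class/hwmon/hwmon2/pwm1
```

Run something like that from cron every minute or so and the fans track the worst-case drive instead of a backplane thermistor.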


I haven't seen a bunch of drives with Molex plugs in a very, very long time…


This is always on the label on the side of any PSU, and should also be in the manual.


You mean the fan connector?

On the hard disks: the connector to the right of the SATA data connector (the power connector is the one on the left).
[image: hard drive SATA data and power connectors]

Where did you get that pic from?

Inter-Tech 4F28 MINING-RACK, server chassis black (alternate.de)

Lol, I get this pic when I click that link :)

[image]


Thanks a lot for the response ;p. So I'm better off keeping my drives in the Fractal Design Define 7 XL to have some more room?

My HDDs are currently hovering around 35-45 °C. That's OK, ain't it?

Are you referring to the 4F28 case, or will I have the same problem with the 4736 case?