I am new to this forum. I recently built an all-in-one plotting and farming workstation with an RTX 3060, 256 GB of RAM, and a 1 TB NVMe SSD. I get 6 minutes per plot on Windows and 3 minutes per plot on Linux. As far as I know, CUDA compute should perform similarly on both Windows and Linux. Does anyone know what causes the difference, and what can be improved on Windows?
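For reference, a setup like this is usually driven by a GPU plotter such as Gigahorse's `cuda_plot_k32`. Below is a hedged sketch of the kind of invocation involved; the flag names are from memory of that tool, and the keys, contract address, and paths are placeholders, so check the plotter's `--help` output before relying on any of it:

```shell
# Hypothetical example: plot continuously with GPU compression level 6,
# staging on an NVMe temp dir and writing finished plots to the farm disk.
# <farmer_key> and <contract_address> are placeholders for your own values.
./cuda_plot_k32 \
    -n -1 \                      # plot until stopped
    -C 6 \                       # compression level (matches the C6 plots mentioned below)
    -t /mnt/nvme/tmp/ \          # fast temp directory (NVMe)
    -d /mnt/farm/ \              # final destination for finished plots
    -f <farmer_key> \
    -c <contract_address>
```

With 256 GB of RAM, plotters in this family can typically keep most or all of the intermediate tables in memory, which is where the large Windows-vs-Linux I/O gap tends to show up.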
I prefer Windows to Linux because, besides farming, I run proprietary software (CAD, simulation, etc.) for research.
Can you share the exact command you use? I'm plotting on Windows with a 128 GB setup and a 1080 Ti, and my time is 5.5 minutes; it should be roughly half that with 256 GB. The plotting performance difference between a 1080 Ti and a 3060 is hardly noticeable.
Performance should be a foundation, not a patch, but Microsoft has treated it as a patch ever since Win 1.0. The Windows storage API is a joke, as if MS doesn't understand that SSD tech has been around since 2008 or so. ROFL.
Linux will always be faster at I/O and compute.
Don't bother with Windows. I get 180-second plots on an old server with 512 GB of RAM running Debian.
But you will also lose performance on your video card when you try to run everything on one machine!
Why not build a second-hand machine just for plotting and farming? You don't need a 3000-series card; a 1070 or 1080 will do the job.
The GPU load from farming is quite low. I farm and plot at the same time on my 3060 and it works perfectly fine. I have about 700 TB of C6 plots.
My goal is to have an all-in-one server that does farming, plotting, and other work like CAD, running 24/7. It saves cost and space, and I can do everything remotely.
I had a second plotter machine before. I had to configure IPMI to turn it on and off, then set up 10 Gb Ethernet to move plots around. And that machine was useless to me except for plotting.