4

Is there a way to use my main drive, an NVMe PCIe 3.0 SSD, as a cache?

The use case is this:

When I send large files from my desktop computer to my Ubuntu Server 21.04 machine, I want the server to first dump the files I'm transferring onto the NVMe drive and then move them to their intended destination on one of my mechanical 3.5-inch hard drives.

Reason? To max out the 10 Gbit network connection I have between the two devices, and to future-proof for the 25 Gbit connection I'll have when I upgrade.

Is this possible?

So basically how it goes is this:

Say I send a 10 GB video file from Computer A, and I want to transfer it to the server at /somelocation/mechanicalHDD/videos. The way I envision it working in the background is that the server first receives the file over the network directly onto my main NVMe SSD (/home/cache), and then transfers it from /home/cache to /somelocation/mechanicalHDD.

My main PC doesn't have to wait until my home server transfers the file from /home/cache to /somelocation/mechanicalHDD. It only cares that the file has been successfully transferred somewhere on my server, for quick transfer speeds.

Jono
  • ZFS allows you to add your SSD as a cache device for your HDD (*needs to be in ZFS as well*) by configuring what they call a pool ... That being said, why not run a script to check with something like `inotifywait` and move whatever it finds on the SSD to the HDD, so you can copy straight to the SSD from your desktop? ... It's more reliable and faster than any caching workaround you might implement other than using ZFS – Raffa Jul 15 '22 at 14:15
  • You can use `rsync` to transfer to the SSD location, then with `&&`, if everything went OK, move from the SSD to the HDD location. – Pablo Bianchi Jul 15 '22 at 16:51
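A minimal sketch of Pablo Bianchi's suggestion, run from the desktop (the hostname `server`, user `me`, and the file name are placeholders I'm assuming, not from the thread):

#!/bin/bash
# Copy the file to the server's SSD staging directory; only if rsync
# succeeds, tell the server to move it onto the mechanical HDD.
rsync -a bigfile.mkv me@server:/home/cache/ &&
  ssh me@server 'mv -n /home/cache/bigfile.mkv /somelocation/mechanicalHDD/videos/'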

2 Answers

5

ZFS allows you to add your SSD as a cache device for your HDD (needs to be in ZFS as well) by configuring what is called a storage pool ... There are other caching solutions as well.
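For reference, a rough sketch of the ZFS side (the pool name `tank` and the device names are placeholders, not from the answer; adapt them to your disks):

# Create a pool on the mechanical HDD with the NVMe drive as an L2ARC cache device:
sudo zpool create tank /dev/sdb cache /dev/nvme0n1p4

# Or add the cache device to an existing pool:
sudo zpool add tank cache /dev/nvme0n1p4

Note that a `cache` device is a read cache (L2ARC), so it wouldn't buffer incoming writes the way the asker wants, which is part of the point made next.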

That being said, caching is not actually what you want, because your copying command/program will still wait until the file has been fully written to the destination directory, i.e. caching plays only a minimal role in ending the copying process earlier.

To achieve your requirement:

Say I send a 10 GB video file from Computer A, and I want to transfer it to the server at /somelocation/mechanicalHDD/videos. The way I envision it working in the background is that the server first receives the file over the network directly onto my main NVMe SSD (/home/cache), and then transfers it from /home/cache to /somelocation/mechanicalHDD.

My main PC doesn't have to wait until my home server transfers the file from /home/cache to /somelocation/mechanicalHDD. It only cares that the file has been successfully transferred somewhere on my server, for quick transfer speeds.

I would suggest that you just copy files straight to your server's SSD and run a script that watches the SSD and moves whatever it finds there to the HDD ... This is more reliable (files are permanently saved to the SSD in the first place) and faster (the copying process from your desktop to the server will finish sooner than with write-caching to the HDD) than any caching workaround you might implement other than using ZFS ... the script could be as simple as:

#!/bin/bash

source_d="/home/cache/" # Specify the source directory (the SSD).
destination_d="/somelocation/mechanicalHDD/videos/" # Specify the destination directory (the HDD).

# Watch the source directory and act on each file once it has been fully written.
inotifywait -m -q -e close_write "$source_d" |
  while read -r path action file; do
    echo mv -n -- "$path$file" "$destination_d$file" # "echo" makes this a dry run (simulation) ... remove it to actually move files
  done
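To use it, you could save the script, make it executable and leave it running; `inotifywait -m` keeps watching until the process is stopped (the file name below is just a placeholder):

chmod +x move-from-cache.sh
./move-from-cache.sh &   # consider a systemd service or tmux session so it survives logout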
Raffa
  • What is `$action`? – Pablo Bianchi Jul 15 '22 at 16:37
  • @PabloBianchi `inotifywait` outputs three fields: **1.** the watched directory, **2.** the action (*event*) performed on the file (*i.e. create, modify, read ... etc.*), which is not used in the script since only the `close_write` event is specified but is there as a placeholder, and **3.** the filename ... thanks for the edit :-) – Raffa Jul 15 '22 at 16:46
3

It looks like bcache would do what you need.

Here's an answer that, while a bit outdated, will give you the basics:

How to setup bcache?

There's also Arch Linux's excellent documentation as a reference.
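Very roughly, a bcache setup would look like this (the device names are placeholders for your HDD and a spare partition on the NVMe drive; see the linked guides for the full procedure):

# Format the backing device (HDD) and the cache device (SSD partition); this wipes them:
sudo make-bcache -B /dev/sdb
sudo make-bcache -C /dev/nvme0n1p4

# Attach the cache set to the resulting bcache device (UUID from bcache-super-show):
echo <cache-set-uuid> | sudo tee /sys/block/bcache0/bcache/attach

# Writeback mode makes writes land on the SSD first, which is the behaviour you're after:
echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode

# Then create a filesystem on /dev/bcache0 and mount it as usual.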

roadmr
  • Nice pointer. Keep in mind that bcache tries to avoid caching sequential writes, though. You probably need to `echo 0 > /sys/block/bcache0/bcache/sequential_cutoff`. See https://www.kernel.org/doc/Documentation/bcache.txt. – Guido Jul 15 '22 at 21:47