I think the best method is to use a shell script. Something like this:

```
# copy everything, and delete the originals only if the copy succeeded
scp -r path/to/files/* remote.ssh.server:/path/to/backup/ && rm -r path/to/files/*
```

This way, if it gets stuck, the delete never runs, and you can run it again once you've dealt with the error. If you used a remote file mount instead, anything at all (sshfs, SMB, whatever), you could take advantage of `cp -rfvpu`: it preserves the permissions, and the `-u` flag skips any file whose copy at the destination is already at least as new (it compares modification times, not size or creation date).
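
A rough sketch of that approach with sshfs, where the mount point `/mnt/backup` and the paths are just placeholders:

```
# mount the remote backup directory locally over SSH
sshfs user@remote.ssh.server:/path/to/backup /mnt/backup

# recursive, force, verbose, preserve permissions/timestamps, skip files already up to date
cp -rfvpu path/to/files/. /mnt/backup/

# unmount when done
fusermount -u /mnt/backup
```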

To be honest, scp is crappy.

You could also use `rsync`; I believe it has the functionality you're after, all in one.
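
Something along these lines, if I remember the flags right (host and paths are placeholders):

```
# -a preserves permissions and timestamps, -v is verbose, -z compresses in transit;
# --remove-source-files deletes each source file only after it has transferred successfully
rsync -avz --remove-source-files path/to/files/ remote.ssh.server:/path/to/backup/
```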

Discussion

I really appreciate this suggestion!... but I have to ask: why the fuck do I need a shell script to do this? Isn't this the whole point of cloud storage? (I'm not at all annoyed with you btw, I just can't believe this functionality requires terminal commands or bash scripting.)

The confusion is around "storage" in the sense of "offsite" versus "synchronised".

It's not the same as moving files around on your disk: you can't just change the directory they're attached to. The data has to be copied, and whether you delete the originals afterwards is up to you.

Your use case isn't really the one the word "sync" describes.

Also, is there really any reason why you don't just do it with a USB disk?

And the command line is the simplest way to do it. The GUI is designed for people with high time preference, to put it succinctly. Computer programmers and administrators have naturally lower time preference.

I use Nextcloud with bind-mounted SMB drives.
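
Roughly like this, though the server, share, mount points and Nextcloud data path are placeholders for my own setup:

```
# mount the SMB share
sudo mount -t cifs //fileserver/backup /mnt/smb-backup -o credentials=/root/.smbcredentials,uid=www-data,gid=www-data

# bind-mount it into the Nextcloud data directory
sudo mount --bind /mnt/smb-backup /var/www/nextcloud/data/admin/files/backup

# let Nextcloud pick up files added outside the web interface
sudo -u www-data php /var/www/nextcloud/occ files:scan --all
```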

But I'll steal your shell script 😁, it's simple and seems effective.