I recently looped back around to working on my blog with the intention of finally adding environment support. Originally the live copy was all there was and I worked directly on it, but now the site is environment-aware and I can make sweeping changes on a private server without messing up the live one. This project, however, is the first where I am using DigitalOcean's Spaces product - a (mostly) S3-compatible object storage - so I needed to copy a snapshot of my existing bucket into a new bucket just for my development needs.
For example, any images uploaded to blog posts get put into the Spaces bucket, lovingly called pegasusgate-upl, and my goal was to clone it to pegasusgate-dev so I could work on it privately. Search a few keywords and you quickly come across DigitalOcean's own documentation telling you how to do exactly this with an application called rclone. Their instructions have you downloading and extracting an old version of rclone, and of course I was like, screw that, apt-get install rclone. The problem is that things have changed, so the directions as written by DigitalOcean no longer work as advertised, and I refused to install rclone any other way.
Note: this could also help if you need to transfer files from one bucket to another for ANY reason, like moving regions or adding redundancy.

Install rclone

1) sudo apt-get install rclone (or however you install crap on your distro)
2) run rclone once to see its command list... this also makes it generate the config directory we are going to use.
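
For reference, on a Debian-ish box those two steps look something like this (the exact package manager commands will differ on other distros):

sudo apt-get update
sudo apt-get install rclone
rclone version    # sanity check; a first run also generates the config directory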

Configure rclone

If you ran rclone once you should have a new folder at ~/.config/rclone, and inside this folder you need to create rclone.conf with the following contents:
[spaces-<my-project-alias>]
type = s3
env_auth = false
access_key_id = <your spaces access key>
secret_access_key = <your spaces secret key>
endpoint = <your spaces region>.digitaloceanspaces.com
acl = public-read
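To make that concrete, here is a filled-in sketch (the alias, key, secret, and region below are made-up placeholders; yours will differ):

[spaces-pgate]
type = s3
env_auth = false
access_key_id = DO00EXAMPLEACCESSKEY
secret_access_key = example+secret+key+do+not+use
endpoint = nyc3.digitaloceanspaces.com
acl = public-read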
It turns out what is broken in DigitalOcean's documentation is the acl property in that config file: they tell you to set it to public, but that no longer works, and it should now be public-read. To test that it is working you can run rclone listremotes, which shows the remotes you have configured, and then rclone tree <remote>: to list their contents.
Note: if you need to transfer between two different regions, you need to add two configuration blocks to your rclone.conf file with two different aliases, something like spaces-project-src and spaces-project-dest.
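Sketching that out (the regions below, nyc3 and sfo3, are just examples; use your own, along with the right keys for each):

[spaces-project-src]
type = s3
env_auth = false
access_key_id = <your spaces access key>
secret_access_key = <your spaces secret key>
endpoint = nyc3.digitaloceanspaces.com
acl = public-read

[spaces-project-dest]
type = s3
env_auth = false
access_key_id = <your spaces access key>
secret_access_key = <your spaces secret key>
endpoint = sfo3.digitaloceanspaces.com
acl = public-read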
bob@web1:~$ rclone listremotes
spaces-pgate:
bob@web1:~$ rclone tree spaces-pgate:
/
├── pegasusgate-dev
│   ├── 3598fed6-137f-4828-99bf-2182fc54834d
│   │   └── 1eb1412f-0f50-6592-88f6-158f0c5b2ea0
│   │       ├── image.jpg
│   │       ├── lg.jpg
│   │       ├── md.jpg
│   │       ├── sm.jpg
│   │       └── th.jpg
Note: it is important to include the colon (:) after the spaces alias on that tree command. Without it, the command does nothing.
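
If the full tree gets noisy, rclone has a few other listing commands that take the same <remote>:<bucket> syntax (these are standard rclone commands, nothing Spaces-specific):

rclone lsd spaces-pgate:                   # list just the buckets
rclone ls spaces-pgate:pegasusgate-upl     # flat list of files with sizes
rclone tree spaces-pgate:pegasusgate-upl   # tree of a single bucket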

Run rclone For Real

So... copy one to the other already...
rclone copy <source-remote>:<source-bucket> <dest-remote>:<dest-bucket>
rclone copy spaces-pgate:pegasusgate-upl spaces-pgate:pegasusgate-dev
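If you are nervous about pointing a copy at the wrong place, the same command accepts a couple of standard rclone flags worth knowing: --dry-run prints what would be transferred without touching anything, and -P shows live progress.

rclone copy --dry-run spaces-pgate:pegasusgate-upl spaces-pgate:pegasusgate-dev
rclone copy -P spaces-pgate:pegasusgate-upl spaces-pgate:pegasusgate-dev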
And when it finishes you should have two separate buckets with identical contents. Now my dev server can reference the pegasusgate-dev bucket and everything will be fine.
Note: most documentation examples tell you to use rclone sync instead of rclone copy, but when I did that, a bunch of files and folders were missing afterward. copy, on the other hand, seemed reliable.
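
If you want to verify for yourself that nothing went missing, rclone ships a check command that compares the two sides and reports files that differ or exist on only one of them (again, a standard rclone command; this part is my suggestion, not from DigitalOcean's docs):

rclone check spaces-pgate:pegasusgate-upl spaces-pgate:pegasusgate-dev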