shithub: s3


branches: front

Clone

clone: git://shithub.us/moody/s3 gits://shithub.us/moody/s3
push: hjgit://shithub.us/moody/s3
patches to: moody@posixcafe.org

Last commit

0591d4ae – Jacob Moody <moody@posixcafe.org> authored on 2025/09/28 00:27
update docs, add man page, readme is just nroff'd manpage now

About


     s3(1)                                                       s3(1)

     NAME
          s3/cat, s3/ls, s3/rm, s3/cp, s3/write, s3/factotum - tools
          for rudimentary access to s3 buckets

     SYNOPSIS
          s3/cat [ -o offset ] [ -n count ] s3://bucket/file
          s3/ls s3://bucket/prefix/
          s3/rm s3://bucket/file
          s3/cp file s3://bucket/destination
          s3/cp s3://bucket/source file
          s3/write s3://bucket/file
          s3/write -l
          s3/factotum [ -p ] [ -s srvname ] [ -m mntpt ]

     DESCRIPTION
          These tools provide rudimentary access to data stored within
          s3 (or API-compatible) buckets.  The tools do little to
          paper over the realities of the s3 API, and mostly aim to
          map directly to different HTTP verbs where applicable.

          s3/cat issues a GET request against the provided path and
          outputs the result to standard output.
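
          For example, assuming an illustrative bucket named mybucket
          (judging by the SYNOPSIS, -o and -n presumably select count
          bytes starting at offset):

          % s3/cat s3://mybucket/notes.txt
          % s3/cat -o 1024 -n 512 s3://mybucket/blob >chunk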

          s3/rm issues a DELETE request against the provided path.

          s3/ls issues a GET request against the path, and attempts to
          parse the XML output as a directory listing.
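
          For example, to list the objects under a prefix and then
          remove one of them (names are illustrative):

          % s3/ls s3://mybucket/photos/
          % s3/rm s3://mybucket/photos/old.jpg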

          s3/cp copies files to and from an s3 bucket.  When
          conducting an upload, an attempt is made to provide the
          correct MIME type of the file through the use of file(1).
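
          For example, to upload a local file and fetch it back again
          (names are illustrative):

          % s3/cp report.pdf s3://mybucket/reports/report.pdf
          % s3/cp s3://mybucket/reports/report.pdf /tmp/report.pdf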

          s3/write reads standard input and uploads the content to a
          remote object a chunk at a time using multipart uploads.  S3
          multipart uploads are a stateful interface: they must be
          created, referenced, and then either aborted or completed.
          As such, it is possible to wind up with zombie multipart
          uploads.  The -l flag may be used to query the list of
          currently open multipart uploads; it prints out invocations
          of s3/rm that can be used to abort each upload.
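
          For example, to stream a compressed archive into an object
          and then check for any multipart uploads left open (names
          are illustrative):

          % tar c /sys/doc | gzip | s3/write s3://mybucket/doc.tgz
          % s3/write -l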

     CREDENTIALS AND PARAMETERS
          s3/factotum provides an overlay factotum for handling the
          signing of requests using the $AWS_SECRET_ACCESS_KEY.  Each
          tool presents s3/factotum with the parameters to sign; the
          secret access key is not directly retrievable after being
          stored.  s3/factotum can store multiple sets of keys, and as
          such each secret access key must be provided with a paired
          access id for disambiguation. The following is an example of
          adding a key pair to s3/factotum:

          % echo 'key proto=aws4 access=blah !secret=whatever' >/mnt/factotum/ctl

          s3/factotum by default mounts itself over top of
          /mnt/factotum and does not post itself to /srv.  This
          behavior may be changed with the -m and -s flags respectively.
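
          For example, to mount the overlay elsewhere and post it to
          /srv under an illustrative name:

          % s3/factotum -s s3.factotum -m /n/s3factotum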

          Each tool needs to be provided both the access id and the
          endpoint, and optionally a region.  These parameters may be
          specified using either their standard environment variable
          names or through command line flags.  $AWS_ACCESS_KEY_ID
          and the -a flag may be used to specify the access id.
          $AWS_ENDPOINT_URL_S3 and the -u flag may be used to specify
          the endpoint URL.  $AWS_DEFAULT_REGION and the -r flag may
          be used to specify the region, defaulting to "auto" if not
          set.
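
          For example, assuming an illustrative access id, endpoint,
          and bucket, the flag form and the environment variable form
          below should be equivalent:

          % s3/ls -a myaccessid -u https://example.com s3://mybucket/

          % AWS_ACCESS_KEY_ID=myaccessid
          % AWS_ENDPOINT_URL_S3=https://example.com
          % s3/ls s3://mybucket/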

     SEE ALSO
          webfs(4), factotum(4)

     BUGS
          Only AWS4-HMAC-SHA256 signatures are supported.  Only tested
          against Cloudflare's R2™.  No direct s3:// → s3:// copying.