Ngoc Phuong

[GO] Handle file uploading with Rclone in a Golang project

Rclone is a great project that helps manage files across multiple cloud storage providers. The project has more than 32k stars on GitHub.

Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

https://rclone.org/#about

Rclone is shipped as a command-line program and is easy for end users to configure and run.

But what if you want to use Rclone's features inside your own Golang project? Luckily, Rclone is written in Golang, so we can use the Rclone API directly in our code.

In this post, I will demonstrate how to use the Rclone API to handle file uploading.

Assume that we already have an API that receives a multipart form request; this API parses the request and exposes a *multipart.FileHeader for us to use in the next step.
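
If you do not have such an API yet, here is a minimal sketch using the standard net/http package (the handler name and the "file" form field are assumptions, not part of this project):

import (
	"net/http"
)

// uploadHandler is a hypothetical handler showing how to obtain the
// *multipart.FileHeader from an incoming multipart form request.
func uploadHandler(w http.ResponseWriter, r *http.Request) {
	// Keep up to 32 MiB of the form in memory; larger parts spill to temp files.
	if err := r.ParseMultipartForm(32 << 20); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	files := r.MultipartForm.File["file"] // "file" is the assumed form field name
	if len(files) == 0 {
		http.Error(w, "missing file", http.StatusBadRequest)
		return
	}

	fileHeader := files[0] // *multipart.FileHeader, passed to the disk later
	_ = fileHeader
	w.WriteHeader(http.StatusNoContent)
}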

Because Rclone supports many storage providers and we may want to use several of them in our code, we will define an abstract interface for the future implementations.

Define the types and interface

import (
	"context"
	"io"
	"mime/multipart"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/object"
)

type FileInfo struct {
	Disk string `json:"disk,omitempty"`
	Path string `json:"path,omitempty"`
	Type string `json:"type,omitempty"`
	Size int    `json:"size,omitempty"`
}

type Disk interface {
	Name() string
	Url(filepath string) string
	Delete(c context.Context, filepath string) error
	Put(c context.Context, in io.Reader, size int64, mime, dst string) (*FileInfo, error)
	Multipart(c context.Context, m *multipart.FileHeader, dsts ...string) (*FileInfo, error)
}

type RcloneDisk struct {
	fs.Fs
	DiskName string `json:"name"`
	Root     string
}

func (r *RcloneDisk) Name() string {
	return r.DiskName
}
  • Disk is the interface that we will use throughout the rest of the post
  • FileInfo holds the information about an uploaded file
  • RcloneDisk is the base implementation of the Disk interface; the concrete disks we create later will embed this base

Taking a look at the Rclone Fs interface, we find this Put method:

type Fs interface {
	...

	// Put in to the remote path with the modTime given of the given size
	//
	// When called from outside an Fs by rclone, src.Size() will always be >= 0.
	// But for unknown-sized objects (indicated by src.Size() == -1), Put should either
	// return an error or upload it properly (rather than e.g. calling panic).
	//
	// May create the object even if it returns an error - if so
	// will return the object and the error, otherwise will return
	// nil and the error
	Put(ctx context.Context, in io.Reader, src ObjectInfo, options ...OpenOption) (Object, error)

	...
}

Rclone already implements the Fs interface for each supported provider; the full list of backends can be found in the Rclone documentation.

Create the Put method for the base struct RcloneDisk

func (r *RcloneDisk) Put(ctx context.Context, reader io.Reader, size int64, mime, dst string) (*FileInfo, error) {
	// Build the ObjectInfo that rclone needs: destination path, modification
	// time, size, storable flag, and no hashes or remote Fs.
	objectInfo := object.NewStaticObjectInfo(
		dst,
		time.Now(),
		size,
		true,
		nil,
		nil,
	)

	rs, err := r.Fs.Put(ctx, reader, objectInfo)
	if err != nil {
		return nil, err
	}

	return &FileInfo{
		Disk: r.DiskName,
		Path: dst,
		Type: mime,
		Size: int(rs.Size()),
	}, nil
}

As defined in the Rclone Fs interface, the Put method receives these parameters:

  • ctx: a context for controlling cancellation and timeouts
  • reader: an io.Reader that streams the file content; we will obtain it from the *multipart.FileHeader
  • src: an fs.ObjectInfo that holds the file metadata; to create it, we have to collect some additional information such as the file size and the upload destination (a short usage sketch of our Put wrapper follows below)
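
Once a concrete disk exists (see NewLocal and NewS3 below), the wrapper can be fed any io.Reader. This is a minimal sketch; the payload, MIME type, and destination path are made up for illustration, and it only needs the standard bytes package on top of the imports above:

// uploadBytes uploads an in-memory payload through the Put wrapper.
func uploadBytes(ctx context.Context, disk Disk) (*FileInfo, error) {
	data := []byte("hello rclone")
	// "files/hello.txt" is an illustrative destination path.
	return disk.Put(ctx, bytes.NewReader(data), int64(len(data)), "text/plain", "files/hello.txt")
}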

Create the Multipart method for the base struct RcloneDisk

func (r *RcloneDisk) Multipart(ctx context.Context, m *multipart.FileHeader, dsts ...string) (*FileInfo, error) {
	f, err := m.Open()
	if err != nil {
		return nil, err
	}
	defer f.Close()

	// Read up to the first 512 bytes to detect the content type.
	fileHeader := make([]byte, 512)
	n, err := f.Read(fileHeader)
	if err != nil && err != io.EOF {
		return nil, err
	}

	// Rewind so the whole file gets uploaded, not just the remaining bytes.
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		return nil, err
	}

	mime := http.DetectContentType(fileHeader[:n])

	dst := ""
	if len(dsts) > 0 {
		dst = dsts[0]
	} else {
		dst = r.UploadFilePath(m.Filename)
	}

	return r.Put(ctx, f, m.Size, mime, dst)
}

Create the default file path for every upload

func (r *RcloneDisk) UploadFilePath(filename string) string {
	// Replace anything that is not alphanumeric, "_", "-" or "." with a dash,
	// collapse repeated dashes, and drop a dash right before the extension.
	var filenameRemoveCharsRegexp = regexp.MustCompile(`[^a-zA-Z0-9_\-\.]`)
	var dashRegexp = regexp.MustCompile(`\-+`)
	now := time.Now()
	filename = filenameRemoveCharsRegexp.ReplaceAllString(filename, "-")
	filename = dashRegexp.ReplaceAllString(filename, "-")
	filename = strings.ReplaceAll(filename, "-.", ".")

	// <root>/<year>/<month>/<unix-microseconds>_<sanitized-filename>
	return path.Join(
		r.Root,
		strconv.Itoa(now.Year()),
		fmt.Sprintf("%02d", int(now.Month())),
		fmt.Sprintf("%d_%s", now.UnixMicro(), filename),
	)
}
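
As a quick illustration (the timestamp below is made up, and we assume Root is "files"), a filename such as "My Photo (1).png" would be sanitized and placed like this:

// The DiskName and Root values here are illustrative only.
disk := &RcloneDisk{DiskName: "local_disk", Root: "files"}
dst := disk.UploadFilePath("My Photo (1).png")
// dst looks like: files/2023/06/1687855462000000_My-Photo-1.png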

Create the Delete method for the base struct RcloneDisk

func (r *RcloneDisk) Delete(ctx context.Context, filepath string) error {
	// Look up the remote object by its path, then remove it.
	obj, err := r.Fs.NewObject(ctx, filepath)

	if err != nil {
		return err
	}

	return obj.Remove(ctx)
}
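
The Path stored in the returned FileInfo is exactly what Delete expects; a minimal sketch, assuming info comes from a previous Put or Multipart call:

// removeUpload deletes a previously uploaded file using the path
// recorded in its FileInfo.
func removeUpload(ctx context.Context, disk Disk, info *FileInfo) error {
	return disk.Delete(ctx, info.Path)
}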

Note:

Because we read the first 512 bytes of the file to detect its MIME type (f.Read(fileHeader)), we have to seek back to the start of the reader before uploading (f.Seek(0, io.SeekStart)); otherwise only the remaining bytes would be uploaded and the file would be corrupted.

The basic implementation of RcloneDisk is now done. In the steps below, we will create an S3 disk and a local disk that embed this base disk.

Create the local disk: Save files to local storage

type RcloneLocal struct {
	*RcloneDisk
	Root    string `json:"root"`
	BaseUrl string `json:"base_url"`
}

func (r *RcloneLocal) Url(filepath string) string {
	return r.BaseUrl + "/" + filepath
}

This struct contains two extra properties:

  • Root: the root directory for uploads
  • BaseUrl: the base URL used to build public file URLs

Create the constructor for the local disk:

import (
	"context"

	"github.com/rclone/rclone/backend/local"
	"github.com/rclone/rclone/fs/config/configmap"
)

func NewLocal() Disk {
	rl := &RcloneLocal{
		RcloneDisk: &RcloneDisk{
			DiskName: "local_disk",
		},
		Root:    "/var/www/html/files",
		BaseUrl: "/files",
	}

	cfgMap := configmap.New()
	cfgMap.Set("root", rl.Root)

	fsDriver, err := local.NewFs(context.Background(), rl.DiskName, rl.Root, cfgMap)
	if err != nil {
		panic(err)
	}

	rl.Fs = fsDriver

	return rl
}

In this code, we import the local backend that implements the Rclone Fs interface: "github.com/rclone/rclone/backend/local".

Create the S3 disk: Save files to AWS S3 or S3-compatible storage providers

type RcloneS3 struct {
	*RcloneDisk
	Root            string              `json:"root"`
	Provider        string              `json:"provider"`
	Bucket          string              `json:"bucket"`
	Region          string              `json:"region"`
	Endpoint        string              `json:"endpoint"`
	ChunkSize       fs.SizeSuffix       `json:"chunk_size"`
	AccessKeyID     string              `json:"access_key_id"`
	SecretAccessKey string              `json:"secret_access_key"`
	BaseUrl         string              `json:"base_url"`
	ACL             string              `json:"acl"`
}

func (r *RcloneS3) Url(filepath string) string {
	return r.BaseUrl + filepath
}

Create the constructor for the S3 disk:

import (
	"context"

	"github.com/rclone/rclone/backend/s3"
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
)


func NewS3() Disk {
	rs3 := &RcloneS3{
		RcloneDisk: &RcloneDisk{
			DiskName: "DO_DISK",
			Root:     "/files",
		},
		Root:            "/files",
		Provider:        "DigitalOcean",
		Bucket:          "my-bucket",
		Region:          "sfo3",
		Endpoint:        "sfo3.digitaloceanspaces.com",
		ChunkSize:       fs.SizeSuffix(1024 * 1024 * 5),
		AccessKeyID:     "AccessKeyID",
		SecretAccessKey: "SecretAccessKey",
		BaseUrl:         "https://cdn.mysite",
		ACL:             "public-read",
	}

	cfgMap := &configmap.Simple{}
	cfgMap.Set("provider", rs3.Provider)
	cfgMap.Set("bucket", rs3.Bucket)
	cfgMap.Set("region", rs3.Region)
	cfgMap.Set("endpoint", rs3.Endpoint)
	cfgMap.Set("chunk_size", rs3.ChunkSize.String())
	cfgMap.Set("access_key_id", rs3.AccessKeyID)
	cfgMap.Set("secret_access_key", rs3.SecretAccessKey)
	cfgMap.Set("acl", rs3.ACL)
	cfgMap.Set("bucket_acl", rs3.ACL)

	// The bucket name is used as the root of the S3 remote.
	fsDriver, err := s3.NewFs(context.Background(), "s3", rs3.Bucket, cfgMap)
	if err != nil {
		panic(err)
	}

	rs3.Fs = fsDriver

	return rs3
}

In this code, we import the S3 backend that implements the Rclone Fs interface: "github.com/rclone/rclone/backend/s3".

Usage

var localDisk = NewLocal()
var s3Disk = NewS3()

// server.Context and c.File are placeholders for your web framework of choice.
func Upload(c server.Context) error {
	ctx := context.Background()
	fileHeader := c.File("file") // *multipart.FileHeader

	// Save the file to local storage.
	if _, err := localDisk.Multipart(ctx, fileHeader); err != nil {
		return err
	}

	// Save the file to the S3-compatible storage.
	if _, err := s3Disk.Multipart(ctx, fileHeader); err != nil {
		return err
	}

	return nil
}

Conclusion

In this post, we only covered the file-saving process using Rclone; there are many more features that Rclone offers. By looking into its source code, we can use the Rclone APIs directly instead of calling its binary.

Thank you for spending time on my post. If you have any questions, don't hesitate to leave a comment here!

