
12 posts tagged with "IBC S6"


· 8 min read
Josh Fraser
info

Note: Firefox Send was archived by Mozilla in September 2020, but as an open-source project, the source code was left available. We've based our integration on the Send fork maintained by Tim Visée.

Overview

In this tutorial, we're looking at how we integrated IBC S6 secure object storage as the back-end storage for the open-source file-sharing application, Firefox Send.

Firefox Send is a free, end-to-end encrypted file-sharing application developed by Mozilla that allows users to easily and safely share files over the web. The Send back-end is written in Node.js, allowing us to integrate with the Ionburst Cloud Node.js SDK.

Digging into the Send source code

From a cursory review of the source code and running the application locally, it looked like the focus of our integration would be the server directory, which contains the code for Send's back-end services.

Of particular interest was the storage sub-directory, which contains the functionality for integrating Send with the following:

  • Local filesystem storage;
  • Google Cloud Storage;
  • Amazon S3.

A review of these files outlined common pieces of functionality expected from storage integrations:

  • length – returns the object size from the configured storage method;
  • getStream – retrieves the object from the configured storage method;
  • set – uploads or writes the object to the configured storage method;
  • del – removes the object from the configured storage method.

At the time of integration, IBC S6 provided three of these four functions, as it did not expose the ability to query object size. Further digging suggested that the length function was only being used to set the Content-Length header on the file download response to the user, so it wasn't a functional requirement for the storage integration.

Since completing this original integration work, we've added a HEAD API method, which can be used to query the size of objects stored in IBC S6.

Exploring the storage sub-directory also confirmed how Send handles object metadata. In 'development', Send uses a local, in-memory store to track each object, but is designed to use Redis in production. To gain a better understanding of the Send application, all of our integration work was carried out using Redis as the Send metadata store.

The final check was to see how the Send back-end handles storage configuration. The base back-end configuration is handled in the config.js file found in the server directory, which determines the storage method selected by the index.js file found in the storage sub-directory.

Integrating IBC S6 - Configuration

To begin integrating IBC S6 with Send, we first had to add new configuration options to the Send project so it could use IBC S6 as the new storage method, along with the initial Ionburst Cloud SDK configuration.

The Ionburst Cloud SDK was added to the project using npm:

npm install ionburst-sdk-javascript

A local Redis instance was deployed with Docker to track Send metadata:

docker run -ti -p 6379:6379 redis:latest

A config.json file was added to the root of the Send project to hold the Ionburst Cloud SDK configuration.

{
  "Ionburst": {
    "Profile": "example",
    "IonburstUri": "https://api.example.ionburst.cloud/",
    "TraceCredentialsFile": "ON"
  }
}

A new configuration item was then added to the Send config.js file for IBC S6. Note: this configuration entry is only used to select IBC S6 as the chosen back-end storage, and does not perform any other configuration. The redis_host entry was also adjusted to 127.0.0.1 to override the local in-memory store:

const conf = convict({
  ionburst: {
    format: String,
    default: 'true'
  },
  --- truncated ---
  redis_host: {
    format: String,
    default: '127.0.0.1',
    env: 'REDIS_HOST'
  },
  --- truncated ---
});

A configuration option was added to the storage index.js file, to ensure IBC S6 was selected as the storage method:

class DB {
  constructor(config) {
    let Storage = null;
    if (config.ionburst) {
      Storage = require('./ionburst');
    } else if (config.s3_bucket) {
      Storage = require('./s3');
    } else if (config.gcs_bucket) {
      Storage = require('./gcs');
    } else {
      Storage = require('./fs');
    }
    this.log = mozlog('send.storage');
    this.storage = new Storage(config, this.log);
    this.redis = createRedisClient(config);
    this.redis.on('error', err => {
      this.log.error('Redis:', err);
    });
  }
  --- truncated ---
}

Finally, an ionburst.js file was added to the storage sub-directory, and a constructor created for applicable configuration:

class IonburstStorage {
  constructor(config, log) {
    this.log = log;
  }

Integrating IBC S6 - File Operations

IBC S6 PUT

From the storage index.js file, we can see how Send kicks off a file upload to its configured storage:

async set(id, file, meta, expireSeconds = config.default_expire_seconds) {
  const prefix = getPrefix(expireSeconds);
  const filePath = `${prefix}-${id}`;
  await this.storage.set(filePath, file);
  this.redis.hset(id, 'prefix', prefix);
  if (meta) {
    this.redis.hmset(id, meta);
  }
  this.redis.expire(id, expireSeconds);
}

In this snippet, we can see that Send generates an identifier for each file stored, before passing it and the file to the configured storage method. As IBC S6 has no preference as to how a given object is identified, we can simply pass this identifier to IBC S6 too.

To upload the file to IBC S6, the following function was created in ionburst.js:

set(id, file) {
  return new Promise((resolve, reject) => {
    const putPath = path.join(this.dir, id);
    const fstream = fs.createWriteStream(putPath);
    file.pipe(fstream);
    file.on('error', err => {
      fstream.destroy(err);
    });
    fstream.on('error', err => {
      fs.unlinkSync(putPath);
      reject(err);
    });
    fstream.on('finish', async function() {
      var upload_data = fs.readFileSync(putPath);
      let put = await ionburst.putAsync({
        id: id,
        data: upload_data
      });
      console.log(put);
      fs.unlink(putPath, function(error) {
        if (error) {
          throw error;
        }
      });
      resolve();
    });
  });
}

We encountered some issues passing the file object directly to the Ionburst Cloud SDK. To overcome this, we instead leveraged the existing filesystem functionality to write the file to a temporary directory, read it back for the Ionburst Cloud SDK, then remove the temporary file after a successful upload.

This temporary file/directory approach leveraged functionality already used by Send's filesystem storage, and it was simply a matter of pulling the temporary directory configuration into the Ionburst storage constructor:

class IonburstStorage {
  constructor(config, log) {
    this.log = log;
    this.dir = config.file_dir;
    mkdirp.sync(this.dir);
  }

IBC S6 GET

Similar to the upload function, the main download functionality can be found in the storage index.js file:

async get(id) {
  const filePath = await this.getPrefixedId(id);
  console.log(filePath);
  return this.storage.getStream(filePath);
}

To keep things simple, we replicated the same temporary file functionality for the file download from IBC S6:

async getStream(id) {
  let data = await ionburst.getAsync(id);
  var getPath = path.join(this.dir, id);
  fs.writeFileSync(getPath, data);
  var returnData = fs.createReadStream(getPath);
  returnData.on('end', function() {
    fs.unlink(getPath, function(error) {
      if (error) {
        throw error;
      }
    });
  });
  return returnData;
}

We first grab the file from IBC S6, write it to the temporary directory, then create and return a read stream.

IBC S6 DELETE

Send requires delete functionality from the configured storage method to remove uploaded files once they have reached their download limit, or expiry time.

The IBC S6 delete function was simple to implement:

del(id) {
  return ionburst.delete(id, function(err, data) {
    if (err) {
      throw err;
    }
    console.log(data);
  });
}

Caveats

Content-Length

At the time of integration, IBC S6 had no method of returning a stored object's size, nor did the Send metadata store it. As IBC S6 now has a HEAD API method, this can be added to the implementation to pass the Content-Length header on the file download response.
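
As a rough illustration, a length implementation in ionburst.js could then look something like the sketch below. Note that headWithLenAsync is an assumed method name, mirroring the Go SDK's HeadWithLen; check the Node.js SDK reference for the actual call.

async length(id) {
  // Assumed HEAD-style SDK call returning the stored object's size in bytes,
  // which Send can use to set the Content-Length header on downloads.
  const size = await ionburst.headWithLenAsync(id);
  return size;
}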

File Size

Depending on the deployment, Send can handle files up to 2.5GB. IBC S6 currently supports a maximum object size of 50MB, with larger objects requiring client-side processing before upload.

As a simple proof-of-concept, we've kept this 50MB limit in place for our fork of Send. However, since the time of integration, we've started to add our new SDK Manifests feature, which allows the Ionburst Cloud SDK to handle objects larger than 50MB. Once manifests have been added to our Node.js SDK, this functionality will be added to our Send fork.

Conclusion

All in all, integrating Firefox Send with IBC S6 was a relatively quick and simple process, and it gave us an opportunity to try out our Node.js SDK in an existing application. To try our Send fork for yourself, please check out our Getting started with Send and IBC S6 and Secure file-sharing with IBC S6, Firefox Send and the AWS free tier tutorials.

The full project source for our Send fork can be found here.

We'd also like to say thanks to the Mozilla team for building the original Send application, and to Tim Visée for maintaining the main Send fork.

· 7 min read
Iain Sutherland

Previously, I wrote a tutorial about how I, as a proficient C# developer, would go about converting code using the AWS .NET SDK for S3 to instead use the Ionburst .NET SDK. The conclusion was that it was a very trivial exercise.

I've been asked: could you do the same with the Go language? Well, I'm hardly a proficient Go developer, having spent just a few months using it, but I don't think it will be a massive undertaking. Let's give it a go.

Prerequisites

To get started, you'll need appropriate credentials. For this tutorial, I have both AWS SDK credentials and Ionburst Cloud credentials set up on my PC. If you are already familiar with AWS credentials then you may already have a .aws/credentials file in your home directory. For this example, we'll be using credentials set up in the default profile.

The Ionburst Cloud SDKs can also use a credentials file in your home directory, .ionburst/credentials and, like AWS, can also use environment variables. The Ionburst Cloud .NET SDK documentation gives further detail on how the SDK can be configured.
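
For reference, a minimal .ionburst/credentials file follows the same INI layout as its AWS counterpart. The ionburst_id and ionburst_key key names below are based on the Ionburst Cloud SDK documentation, and the values are placeholders:

[default]
ionburst_id = EXAMPLE_ID
ionburst_key = EXAMPLE_KEY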

The starting position

It wasn’t quite as trivial to cobble together a Go program using the AWS Go SDK as it had been for me to write the C# starting position, but I came up with a similar program.

It functions the same way: it uploads a file called image.jpg to an S3 bucket, fetches the object from the bucket, storing it locally as image_download.jpg, before removing the object from the bucket.

Our simple program:

package main

import (
    "fmt"
    "io"
    "os"
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

const (
    bucketName = "ibc-example"
)

type storageInterface interface {
    UploadFile(fileName string) (bool, error)
    FetchFile(fileName string) (bool, error)
    DeleteFile(fileName string) (bool, error)
}

type awsClientWrapper struct {
    client *s3.S3
}

func main() {
    storageClient, err := CreateS3Client()
    if err != nil {
        fmt.Println("Failed to create storage client")
        os.Exit(1)
    }
    upload, err := storageClient.UploadFile("image.jpg")
    if err != nil {
        fmt.Println("Failed to upload file")
    }
    if upload {
        fmt.Println("File uploaded")
        fetch, err := storageClient.FetchFile("image.jpg")
        if err != nil {
            fmt.Println("Failed to fetch file")
        }
        if fetch {
            fmt.Println("File fetched")
            delete, err := storageClient.DeleteFile("image.jpg")
            if err != nil {
                fmt.Println("Failed to delete file")
            }
            if delete {
                fmt.Println("File deleted")
            }
        }
    }
}

func CreateS3Client() (storageInterface, error) {
    sess, err := session.NewSession(&aws.Config{
        Region: aws.String("eu-west-1"),
    })
    if err != nil {
        return nil, err
    }
    return awsClientWrapper{client: s3.New(sess)}, nil
}

func (c awsClientWrapper) DeleteFile(key string) (bool, error) {
    var deleteInput *s3.DeleteObjectInput = &s3.DeleteObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(key)}
    _, err := c.client.DeleteObject(deleteInput)
    if err != nil {
        return false, err
    }
    var headInput *s3.HeadObjectInput = &s3.HeadObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(key)}
    headErr := c.client.WaitUntilObjectNotExists(headInput)
    if headErr != nil {
        return false, headErr
    }
    return true, nil
}

func (c awsClientWrapper) UploadFile(fileName string) (bool, error) {
    file, fileErr := os.Open(fileName)
    if fileErr != nil {
        return false, fileErr
    }
    // defer registered before the upload so the file is closed on all paths
    defer file.Close()
    var putInput *s3.PutObjectInput = &s3.PutObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(fileName),
        Body:   file}
    _, err := c.client.PutObject(putInput)
    if err != nil {
        return false, err
    }
    return true, nil
}

func (c awsClientWrapper) FetchFile(key string) (bool, error) {
    var fetchInput *s3.GetObjectInput = &s3.GetObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(key)}
    fetchOutput, err := c.client.GetObject(fetchInput)
    if err != nil {
        return false, err
    }
    defer fetchOutput.Body.Close()
    // derive an output filename of the form image_download.jpg
    parts := strings.Split(key, ".")
    var outputFileName string = "download"
    if len(parts) == 1 {
        outputFileName = fmt.Sprintf("%s_download", parts[0])
    }
    if len(parts) == 2 {
        outputFileName = fmt.Sprintf("%s_download.%s", parts[0], parts[1])
    }
    if len(parts) > 2 {
        outputFileName = fmt.Sprintf("%s.download.%s", parts[0], parts[len(parts)-1])
    }
    file, fileErr := os.Create(outputFileName)
    if fileErr != nil {
        return false, fileErr
    }
    defer file.Close()
    if _, err = io.Copy(file, fetchOutput.Body); err != nil {
        return false, err
    }
    return true, nil
}

The conversion

So how do I change that code to use the Ionburst Cloud Go SDK?

Package management is slightly different, in that there's nothing quite like NuGet for Go, and the import block seems to be handled automatically by Visual Studio Code, unlike the C# using statements, which are maintained manually.

Anyway, first off I’ll change the structure containing the client. In hindsight I should probably have just called the structure clientWrapper and then I wouldn’t have even needed to change the name. It’s just a case of replacing *s3.S3 with *ionburst.Client:

type awsClientWrapper struct {
    client *s3.S3
}

to

type ionburstClientWrapper struct {
    client *ionburst.Client
}

Next, I then replace the CreateS3Client function with a CreateIonburstClient function which looks like:

func CreateIonburstClient() (storageInterface, error) {
    client, err := ionburst.NewClient()
    if err != nil {
        return nil, err
    }
    return ionburstClientWrapper{client: client}, nil
}

With this complete, the first line in main() becomes:

storageClient, err := CreateIonburstClient()

Next, the functions have to be changed to use ionburstClientWrapper instead of awsClientWrapper, and the final step is to change the places where a function call is made to c.client, because that has changed from *s3.S3 to *ionburst.Client.

Using Visual Studio Code, I was already being presented with squiggly red lines showing me the functions that were no longer defined for *ionburst.Client.

With the Ionburst Cloud Go SDK there are no request and response structures; everything is done with parameters and discrete return values. The UploadFile function ends up a bit smaller, becoming:

func (c ionburstClientWrapper) UploadFile(fileName string) (bool, error) {
    file, fileErr := os.Open(fileName)
    if fileErr != nil {
        return false, fileErr
    }
    defer file.Close()
    err := c.client.Put(fileName, file, "")
    if err != nil {
        return false, err
    }
    return true, nil
}

It's worth noting that the Go SDK Put function takes a classification parameter. In the event a classification isn't supplied, a default is applied, so I've left it empty.

The FetchFile and DeleteFile required similar changes; c.client.GetObject() becomes just c.client.Get(), and in the delete function the client call becomes c.client.Delete(). For both, the request structure can be removed and the return values are not wrapped in a structure.

Conclusion

So I would say that for a proficient Go developer, migrating from the AWS Go SDK for S3 to the Ionburst Cloud Go SDK would be as trivial an exercise as I experienced doing the same thing with .NET.

In terms of coding time, it did take less time to make the changes to use the Ionburst Cloud Go SDK than it did to develop the sample program, but since I have less experience with Go, the original program was the more difficult undertaking, while the conversion was just changing a few lines of existing code.

The full converted code can be found below:

package main

import (
    "fmt"
    "io"
    "os"
    "strings"

    "gitlab.com/ionburst/ionburst-sdk-go"
)

const (
    bucketName = "ibc-example"
)

type storageInterface interface {
    UploadFile(fileName string) (bool, error)
    FetchFile(fileName string) (bool, error)
    DeleteFile(fileName string) (bool, error)
}

type ionburstClientWrapper struct {
    client *ionburst.Client
}

func main() {
    storageClient, err := CreateIonburstClient()
    if err != nil {
        fmt.Println("Failed to create storage client: ", err)
        os.Exit(1)
    }
    upload, err := storageClient.UploadFile("image.jpg")
    if err != nil {
        fmt.Println("Failed to upload file")
    }
    if upload {
        fmt.Println("File uploaded")
        fetch, err := storageClient.FetchFile("image.jpg")
        if err != nil {
            fmt.Println("Failed to fetch file")
        }
        if fetch {
            fmt.Println("File fetched")
            delete, err := storageClient.DeleteFile("image.jpg")
            if err != nil {
                fmt.Println("Failed to delete file")
            }
            if delete {
                fmt.Println("File deleted")
            }
        }
    }
}

func CreateIonburstClient() (storageInterface, error) {
    client, err := ionburst.NewClient()
    if err != nil {
        return nil, err
    }
    return ionburstClientWrapper{client: client}, nil
}

func (c ionburstClientWrapper) DeleteFile(key string) (bool, error) {
    err := c.client.Delete(key)
    if err != nil {
        return false, err
    }
    return true, nil
}

func (c ionburstClientWrapper) UploadFile(fileName string) (bool, error) {
    file, fileErr := os.Open(fileName)
    if fileErr != nil {
        fmt.Println("File open error: ", fileErr)
        return false, fileErr
    }
    defer file.Close()
    err := c.client.Put(fileName, file, "")
    if err != nil {
        fmt.Println("Ionburst upload error: ", err)
        return false, err
    }
    return true, nil
}

func (c ionburstClientWrapper) FetchFile(key string) (bool, error) {
    fetchReader, err := c.client.Get(key)
    if err != nil {
        return false, err
    }
    // derive an output filename of the form image_download.jpg
    parts := strings.Split(key, ".")
    var outputFileName string = "download"
    if len(parts) == 1 {
        outputFileName = fmt.Sprintf("%s_download", parts[0])
    }
    if len(parts) == 2 {
        outputFileName = fmt.Sprintf("%s_download.%s", parts[0], parts[1])
    }
    if len(parts) > 2 {
        outputFileName = fmt.Sprintf("%s.download.%s", parts[0], parts[len(parts)-1])
    }
    file, fileErr := os.Create(outputFileName)
    if fileErr != nil {
        return false, fileErr
    }
    defer file.Close()
    if _, err = io.Copy(file, fetchReader); err != nil {
        return false, err
    }
    return true, nil
}

· 6 min read
Josh Fraser

Overview

As covered previously in our GitLab backup tutorial, we (Ionburst) use IBC S6 to protect the backups from our internal GitLab instance. In this tutorial, we will look at the other way we use IBC S6 with GitLab: as artifact storage for our GitLab CI/CD builds.

For this tutorial, we'll be using a GitLab repository containing the base code for our namegen utility. We'll add a GitLab CI/CD config file to build the application, then use IonFS CLI to store the build artifact in IBC S6. Finally, we'll download the artifact to the local machine using IonFS CLI, and execute it.

One key point to note for this tutorial: we will be using an IonFS metadata repository set up in Amazon S3. As our GitLab CI/CD pipelines will be running in a containerised, ephemeral environment, this ensures the IonFS metadata is persisted elsewhere. It will also allow the build artifact to be downloaded to the local machine at the end of the tutorial.

Shared Responsibility Model Breakdown

Customer Responsibility

  • You, the customer, are responsible for the secure management of the Ionburst Cloud credentials used by ionfs.
  • You, the customer, are responsible for the security of ionfs metadata repositories and the metadata stored in them.
  • You, the customer, are responsible for the security of the GitLab application and underlying instance - if self-hosting GitLab.
  • You, the customer, are responsible for the security of the GitLab projects used in conjunction with this tutorial.

Ionburst Cloud Responsibility

  • We are responsible for the security of GitLab CI/CD artifact data stored in IBC S6 using ionfs.
  • We are responsible for the underlying security and availability of the Ionburst Cloud platform.

GitLab CI/CD

To enable GitLab CI/CD for a project, a file, .gitlab-ci.yml, is added to the root of the project. Within this file, the concepts of stages and jobs are used to define the tasks needed to build, test, and deploy an application or piece of software.

For this tutorial, we will be creating a single build stage, with a job that will compile our namegen application, then upload the binary to IBC S6.

Setting up the repository

First, we'll add a .gitlab-ci.yml file to the root of our project.

image: jishf/golang-runner-1.19

stages:
  - build

compile_linux_amd64:
  stage: build
  before_script:
    - export PATH=$PATH:/usr/local/go/bin
    - mkdir -p ~/.ionfs
    - cp $IONFS_CONFIG ~/.ionfs/appsettings.json
    - mkdir -p build
    - export RELEASE_VERSION=0.1.0
    - go get -v -d
  script:
    - GOOS=linux GOARCH=amd64 go build -o ./build/namegen-linux-amd64-$RELEASE_VERSION -ldflags="-X=main.appVersion=$RELEASE_VERSION"
    - ionfs put build/namegen-linux-amd64-$RELEASE_VERSION ion://builds/
    - ionfs ls ion://builds/

To break this file down:

  • image is the Docker image we're going to run our pipeline with; the defined image has both Go and ionfs installed.
  • stages defines the different pipeline stages; we're only specifying a build stage.
  • compile_linux_amd64:
    • this is our configured job, running in the build stage.
    • the before_script ensures the go command is available in the $PATH, creates the ~/.ionfs config directory and file, creates a build directory, then sets an environment variable with the semantic version for our build.
    • the script builds our namegen binary, embedding our version in the build, then uploads the binary to IBC S6, before listing the IonFS metadata repository.

Before we commit the .gitlab-ci.yml file to our repository, we first need to add some environment variables to our repository CI/CD configuration:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_REGION
  • IONBURST_ID
  • IONBURST_KEY
  • IONBURST_URI
  • IONFS_CONFIG

The IONFS_CONFIG variable should be of type "file" and look like the following:

{
  "IonFS": {
    "MaxSize": "50000000",
    "Verbose": "false",
    "DefaultClassification": "Restricted",
    "Repositories": [
      {
        "Name": "builds",
        "Usage": "Data",
        "Class": "Ionburst.Apps.IonFS.Repo.S3.MetadataS3",
        "Assembly": "Ionburst.Apps.IonFS.Repo.S3",
        "DataStore": "ibc-example"
      }
    ],
    "DefaultRepository": "builds"
  }
}

Once these environment variables have been configured, we can commit the .gitlab-ci.yml to our repository to kick off the first pipeline, which looks something like:

Running with gitlab-runner 15.5.0 (0d4137b8)
on ionburst-runner-public vyQm-zNw
Preparing the "docker" executor 00:02
Using Docker executor with image jishf/golang-runner-1.19 ...
Pulling docker image jishf/golang-runner-1.19 ...
Using docker image sha256:cb41b40fca0d36126a9eccf01b1b0e6d6fd4b55380314543b9450b8fd1ba9142 for jishf/golang-runner-1.19 with digest jishf/golang-runner-1.19@sha256:6730a3a0603d91780e15a297a2fcc1ae32de5b5213afc639ca4a6e035118e2c6 ...
Preparing environment 00:01
Running on runner-vyqm-znw-project-40659015-concurrent-0 via ionburst-runner-public...
Getting source from Git repository 00:01
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/ionburst/ionfs-cicd-example/.git/
Checking out e3acd96e as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:08
Using docker image sha256:cb41b40fca0d36126a9eccf01b1b0e6d6fd4b55380314543b9450b8fd1ba9142 for jishf/golang-runner-1.19 with digest jishf/golang-runner-1.19@sha256:6730a3a0603d91780e15a297a2fcc1ae32de5b5213afc639ca4a6e035118e2c6 ...
$ export PATH=$PATH:/usr/local/go/bin
$ mkdir -p ~/.ionfs
$ cp $IONFS_CONFIG ~/.ionfs/appsettings.json
$ mkdir -p build
$ export RELEASE_VERSION=0.1.0
$ go get -v -d
$ GOOS=linux GOARCH=amd64 go build -o ./build/namegen-linux-amd64-$RELEASE_VERSION -ldflags="-X=main.appVersion=$RELEASE_VERSION"
$ ionfs put build/namegen-linux-amd64-$RELEASE_VERSION ion://builds/
$ ionfs ls ion://builds/
IonFS v0.3.0
Directory of ion://builds/
namegen-linux-amd64-0.1.0 10/17/2022 18:35:38
Cleaning up project directory and file based variables 00:01
Job succeeded

Retrieving the artifact

Now that we've successfully deployed our pipeline, our artifact is safely stored in IBC S6, and ready to be consumed or accessed. For our internal usage at Ionburst, this is typically to be picked up by a container build, or to distribute internally.

We can demonstrate the latter by configuring the IonFS repo and appropriate credentials used in our pipeline on our local machine. Once set up, we can list the repo, download our namegen artifact, and run it locally:

ionfs ls ion://builds
ionfs get ion://builds/namegen-linux-amd64-0.1.0 namegen
chmod +x namegen
./namegen

Example output:

[hello@ionfs-cicd-example ~]# ionfs ls
IonFS v0.3.0
Directory of ion://builds/
namegen-linux-amd64-0.1.0 10/17/2022 18:35:38
[hello@ionfs-cicd-example ~]# ionfs get ion://builds/namegen-linux-amd64-0.1.0 namegen
[hello@ionfs-cicd-example ~]# chmod +x namegen
[hello@ionfs-cicd-example ~]# ./namegen
tender-boyd-orr

Wrapping up

In this tutorial, we've covered some background on GitLab CI/CD artifacts and how to protect them with IBC S6 and IonFS CLI.

All the steps covered in this tutorial are currently used by Ionburst Cloud to protect our internal build artifacts. To keep up with the latest developments on using IonFS CLI with GitLab CI/CD, please check out our example repository on GitHub.

· One min read
Josh Fraser
info

Note: This tutorial is based on the use of a new AWS account with access to 12 month free tier offers for EC2, ElastiCache and ALB.

In previous tutorials, we've looked at how to get started with Firefox Send and IBC S6, and at a deep dive on how we added IBC S6 as a back-end storage option. For this tutorial, we'll be deploying Firefox Send with IBC S6 using the AWS Free Tier, to enable secure file sharing in the cloud.

· 11 min read
Josh Fraser
info

Prerequisites - before you begin, please ensure:

Please also note:

  • This tutorial is based on a GitLab instance installed using the Omnibus deployment method, on Rocky Linux 8.6 - other deployment types may require additional steps.

Overview

The self-hosted version of GitLab is a popular tool for privacy-conscious developers, open-source projects, and organisations looking to keep full control of their source code (like us!).

As an organisation operating a GitLab instance internally, one of our key considerations is ensuring we store our GitLab backups in a secure manner. While GitLab provides a suite of functionality allowing backups to be stored on Cloud object storage, we were keen to protect backups of the underlying Ionburst Cloud source code with Ionburst Cloud, while also minimising our configuration overhead.

In this tutorial, we will be covering how to use IBC S6 secure object storage and IonFS CLI to back up self-hosted GitLab instances.

Shared Responsibility Model Breakdown

Customer Responsibility

  • You, the customer, are responsible for the secure management of the Ionburst Cloud credentials used by ionfs.
  • You, the customer, are responsible for the security of ionfs metadata repositories and the metadata stored in them.
  • You, the customer, are responsible for the security of the GitLab application and underlying instance.

Ionburst Cloud Responsibility

  • We are responsible for the security of GitLab backup data stored in IBC S6 using ionfs.
  • We are responsible for the underlying security and availability of the Ionburst Cloud platform.

GitLab backups

When backing up a GitLab instance, there are two main data sources to consider:

  • the application data - database, repositories etc.
  • the configuration and secrets data - stored within /etc/gitlab

GitLab application backups are typically performed with the gitlab-backup tool. Assuming no additional backup options have been added to the GitLab configuration file, this tool creates a tar archive of all GitLab application data and saves it in a well-known directory: /var/opt/gitlab/backups/.

Depending on how the GitLab instance is used, this archive can end up extremely large (tens of GB), typically when CI/CD build artifacts and the container registry are included. The GitLab backup tool allows aspects of the GitLab application to be skipped when backing up, using the SKIP environment variable. To minimise the amount of data stored, this tutorial will skip the artifacts stage.

By default, the GitLab backup process automatically generates a filename for the backup archive using the current timestamp and version of GitLab installed. This generated filename allows GitLab to automatically manage backup archives stored locally.

However, as we will be transferring the backups to IBC S6, and to make the automation process easier, we will override this automatic name using the BACKUP environment variable.

Getting started

To begin, we need a user account set up on the underlying operating system of the GitLab instance, with sudo access - it is not recommended to use the root account.

IonFS CLI will also need to be installed on the GitLab instance, and configured with an IBC S6 data repository and Ionburst credentials file. For the purposes of this tutorial, we will use a local metadata repository stored in our user account's home directory.

Our sample ionfs configuration file:

{
  "IonFS": {
    "MaxSize": "50000000",
    "Verbose": "false",
    "DefaultClassification": "Restricted",
    "Repositories": [
      {
        "Name": "gitlab-local",
        "Usage": "Data",
        "Class": "Ionburst.Apps.IonFS.Repo.LocalFS.MetadataLocalFS",
        "Assembly": "Ionburst.Apps.IonFS.Repo.LocalFS",
        "DataStore": "/home/josh/gitlab-local"
      }
    ],
    "DefaultRepository": "gitlab-local"
  },
  "Ionburst": {
    "Profile": "gitlab",
    "IonburstUri": "https://api.eu-west-1.ionburst.cloud/",
    "TraceCredentialsFile": "OFF"
  }
}

We can verify the IonFS configuration with:

ionfs repos

Which should return something similar to:

[josh@gitlab-example ~]$ ionfs repos
IonFS v0.3.0
Available Repositories (*default):
* [d] ion://gitlab-local/ (Ionburst.Apps.IonFS.Repo.LocalFS.MetadataLocalFS)

Creating the GitLab backups

With ionfs successfully configured, we can now create our GitLab backups. For the application backup, we can use the following:

sudo gitlab-backup create BACKUP="ionfs-example" SKIP="artifacts"

This will look something like:

[josh@git ~]$ sudo gitlab-backup create BACKUP="ionfs-example" SKIP="artifacts"
2022-10-14 20:04:19 +0100 -- Dumping main_database ...
Dumping PostgreSQL database gitlabhq_production ... [DONE]
2022-10-14 20:04:24 +0100 -- Dumping main_database ... done
2022-10-14 20:04:24 +0100 -- Dumping ci_database ... [DISABLED]
2022-10-14 20:04:24 +0100 -- Dumping repositories ...
--- truncated ---
2022-10-14 20:04:28 +0100 -- Dumping repositories ... done
2022-10-14 20:04:28 +0100 -- Dumping uploads ...
2022-10-14 20:04:28 +0100 -- Dumping uploads ... done
2022-10-14 20:04:28 +0100 -- Dumping builds ...
2022-10-14 20:04:28 +0100 -- Dumping builds ... done
2022-10-14 20:04:28 +0100 -- Dumping artifacts ... [SKIPPED]
2022-10-14 20:04:28 +0100 -- Dumping pages ...
2022-10-14 20:04:28 +0100 -- Dumping pages ... done
2022-10-14 20:04:28 +0100 -- Dumping lfs objects ...
2022-10-14 20:04:28 +0100 -- Dumping lfs objects ... done
2022-10-14 20:04:28 +0100 -- Dumping terraform states ...
2022-10-14 20:04:28 +0100 -- Dumping terraform states ... done
2022-10-14 20:04:28 +0100 -- Dumping container registry images ... [DISABLED]
2022-10-14 20:04:28 +0100 -- Dumping packages ...
2022-10-14 20:04:28 +0100 -- Dumping packages ... done
2022-10-14 20:04:28 +0100 -- Creating backup archive: ionfs-example_gitlab_backup.tar ...
2022-10-14 20:04:29 +0100 -- Creating backup archive: ionfs-example_gitlab_backup.tar ... done
2022-10-14 20:04:29 +0100 -- Uploading backup archive to remote storage ... [SKIPPED]
2022-10-14 20:04:29 +0100 -- Deleting old backups ...
2022-10-14 20:04:29 +0100 -- Deleting old backups ... done. (0 removed)
2022-10-14 20:04:29 +0100 -- Deleting tar staging files ...
2022-10-14 20:04:29 +0100 -- Cleaning up /var/opt/gitlab/backups/backup_information.yml
2022-10-14 20:04:29 +0100 -- Cleaning up /var/opt/gitlab/backups/db
2022-10-14 20:04:29 +0100 -- Cleaning up /var/opt/gitlab/backups/repositories
2022-10-14 20:04:29 +0100 -- Cleaning up /var/opt/gitlab/backups/uploads.tar.gz
2022-10-14 20:04:29 +0100 -- Cleaning up /var/opt/gitlab/backups/builds.tar.gz
2022-10-14 20:04:29 +0100 -- Cleaning up /var/opt/gitlab/backups/pages.tar.gz
2022-10-14 20:04:29 +0100 -- Cleaning up /var/opt/gitlab/backups/lfs.tar.gz
2022-10-14 20:04:29 +0100 -- Cleaning up /var/opt/gitlab/backups/terraform_state.tar.gz
2022-10-14 20:04:29 +0100 -- Cleaning up /var/opt/gitlab/backups/packages.tar.gz
2022-10-14 20:04:29 +0100 -- Deleting tar staging files ... done
2022-10-14 20:04:29 +0100 -- Deleting backups/tmp ...
2022-10-14 20:04:29 +0100 -- Deleting backups/tmp ... done
2022-10-14 20:04:29 +0100 -- Warning: Your gitlab.rb and gitlab-secrets.json files contain sensitive data
and are not included in this backup. You will need these files to restore a backup.
Please back them up manually.
2022-10-14 20:04:29 +0100 -- Backup ionfs-example is done.

The filesystem location used by GitLab is locked down to the git user only, so before we can upload our backup to IBC S6, we need to move the archive to an accessible location and change the ownership to our user account:

sudo mv /var/opt/gitlab/backups/ionfs-example_gitlab_backup.tar /tmp/
sudo chown josh:josh /tmp/ionfs-example_gitlab_backup.tar
ls -lah /tmp/ionfs-example_gitlab_backup.tar

Example output:

[josh@git ~]$ sudo mv /var/opt/gitlab/backups/ionfs-example_gitlab_backup.tar /tmp/
[josh@git ~]$ ls -lah /tmp/ionfs-example_gitlab_backup.tar
-rw-------. 1 josh josh 244M Oct 14 20:04 /tmp/ionfs-example_gitlab_backup.tar

As noted in the application backup output, the GitLab configuration files have not been included in the backup. We can create a configuration backup with the following:

sudo tar -cf /tmp/gitlab-config.tar /etc/gitlab/
sudo chown josh:josh /tmp/gitlab-config.tar
ls -lah /tmp/gitlab-config.tar

Example output:

[josh@git ~]$ sudo tar -cf /tmp/gitlab-config.tar /etc/gitlab/
[josh@git ~]$ sudo chown josh:josh /tmp/gitlab-config.tar
[josh@git ~]$ ls -lah /tmp/gitlab-config.tar
-rw-r--r--. 1 josh josh 320K Oct 14 20:32 /tmp/gitlab-config.tar

Uploading the backups to IBC S6

Now that our GitLab application and configuration backups are ready, we can upload them to IBC S6 with ionfs.

First, we create a directory within the metadata repository:

ionfs mkdir ion://gitlab-backups

We can now upload each of our backups:

ionfs put /tmp/ionfs-example_gitlab_backup.tar ion://gitlab-backups/
ionfs put /tmp/gitlab-config.tar ion://gitlab-backups/

Finally, we can verify that the backups have uploaded successfully, and remove our local copies:

ionfs ls ion://gitlab-backups
rm -f /tmp/ionfs-example_gitlab_backup.tar
rm -f /tmp/gitlab-config.tar
Example output:

[josh@git ~]$ ionfs mkdir ion://gitlab-backups
[josh@git ~]$ ionfs put /tmp/ionfs-example_gitlab_backup.tar ion://gitlab-backups/
[josh@git ~]$ ionfs put /tmp/gitlab-config.tar ion://gitlab-backups/
[josh@git ~]$ ionfs ls ion://gitlab-backups
IonFS v0.3.0
Directory of ion://gitlab-local/gitlab-backups/
gitlab-backups/gitlab-config.tar 14/10/2022 20:40:17
gitlab-backups/ionfs-example_gitlab_backup.tar 14/10/2022 20:40:12
[josh@git ~]$ rm -f /tmp/ionfs-example_gitlab_backup.tar
[josh@git ~]$ rm -f /tmp/gitlab-config.tar

Building the backup script

Now that we've gone through the backup steps manually, we can have a look at wrapping them in a simple bash script that can then be used to automatically back up GitLab to IBC S6 (a cron example follows the sample run below). We'll also add in some extra logic to add dates and other useful context to the backup filenames.

So let's take a look at the script:

#!/bin/bash
set -eou pipefail
## setup vars
date=$(date "+%Y%m%d-%H%M%S")
name="$date"_gitlab_backup.tar
data_path=/var/opt/gitlab/backups/
config_path=/etc/gitlab/
user=$(whoami)
## create backups
sudo gitlab-backup create BACKUP="$date" SKIP="artifacts"
sudo mv "$data_path/$name" /tmp/
sudo chown "$user:$user" /tmp/"$name"
sudo tar -cf "/tmp/$date"_gitlab_config.tar "$config_path"
sudo chown "$user:$user" "/tmp/$date"_gitlab_config.tar
## upload to IBC S6
ionfs put "/tmp/$name" ion://gitlab-backups/
ionfs put "/tmp/$date"_gitlab_config.tar ion://gitlab-backups/
## verify and delete local copies
ionfs ls ion://gitlab-backups/
rm -f "/tmp/$name"
rm -f "/tmp/$date"_gitlab_config.tar

Running through the script by section:

  • set -eou pipefail - this is used to ensure the script exits immediately in the event of any failures.
  • We're also setting up the following variables
    • $date is used to add a timestamp to our backup filenames, using the following format: 20221014-205835
    • $name is the full filename of the application backup
    • $data_path is the filesystem location used by GitLab to store the application backup
    • $config_path is the location of the GitLab configuration files
    • $user is the current user, used to change the backup file ownership
  • We then create the backups, adding the variables to the manual steps above.
  • Once the backups are created, we upload them to IBC S6 using ionfs.
  • Finally, we list the contents of the ionfs metadata repository, and remove the local copies of the backup files.

An example execution of the script would look like:

[josh@git ~]$ ./gitlab-backup.sh
[sudo] password for josh:
2022-10-14 20:59:07 +0100 -- Dumping main_database ...
Dumping PostgreSQL database gitlabhq_production ... [DONE]
2022-10-14 20:59:11 +0100 -- Dumping main_database ... done
2022-10-14 20:59:11 +0100 -- Dumping ci_database ... [DISABLED]
2022-10-14 20:59:11 +0100 -- Dumping repositories ...
--- truncated ---
2022-10-14 20:59:16 +0100 -- Dumping repositories ... done
2022-10-14 20:59:16 +0100 -- Dumping uploads ...
2022-10-14 20:59:16 +0100 -- Dumping uploads ... done
2022-10-14 20:59:16 +0100 -- Dumping builds ...
2022-10-14 20:59:16 +0100 -- Dumping builds ... done
2022-10-14 20:59:16 +0100 -- Dumping artifacts ... [SKIPPED]
2022-10-14 20:59:16 +0100 -- Dumping pages ...
2022-10-14 20:59:16 +0100 -- Dumping pages ... done
2022-10-14 20:59:16 +0100 -- Dumping lfs objects ...
2022-10-14 20:59:16 +0100 -- Dumping lfs objects ... done
2022-10-14 20:59:16 +0100 -- Dumping terraform states ...
2022-10-14 20:59:16 +0100 -- Dumping terraform states ... done
2022-10-14 20:59:16 +0100 -- Dumping container registry images ... [DISABLED]
2022-10-14 20:59:16 +0100 -- Dumping packages ...
2022-10-14 20:59:16 +0100 -- Dumping packages ... done
2022-10-14 20:59:16 +0100 -- Creating backup archive: 20221014-205835_gitlab_backup.tar ...
2022-10-14 20:59:16 +0100 -- Creating backup archive: 20221014-205835_gitlab_backup.tar ... done
2022-10-14 20:59:16 +0100 -- Uploading backup archive to remote storage ... [SKIPPED]
2022-10-14 20:59:16 +0100 -- Deleting old backups ...
2022-10-14 20:59:16 +0100 -- Deleting old backups ... done. (0 removed)
2022-10-14 20:59:16 +0100 -- Deleting tar staging files ...
2022-10-14 20:59:16 +0100 -- Cleaning up /var/opt/gitlab/backups/backup_information.yml
2022-10-14 20:59:16 +0100 -- Cleaning up /var/opt/gitlab/backups/db
2022-10-14 20:59:16 +0100 -- Cleaning up /var/opt/gitlab/backups/repositories
2022-10-14 20:59:16 +0100 -- Cleaning up /var/opt/gitlab/backups/uploads.tar.gz
2022-10-14 20:59:16 +0100 -- Cleaning up /var/opt/gitlab/backups/builds.tar.gz
2022-10-14 20:59:16 +0100 -- Cleaning up /var/opt/gitlab/backups/pages.tar.gz
2022-10-14 20:59:16 +0100 -- Cleaning up /var/opt/gitlab/backups/lfs.tar.gz
2022-10-14 20:59:16 +0100 -- Cleaning up /var/opt/gitlab/backups/terraform_state.tar.gz
2022-10-14 20:59:16 +0100 -- Cleaning up /var/opt/gitlab/backups/packages.tar.gz
2022-10-14 20:59:16 +0100 -- Deleting tar staging files ... done
2022-10-14 20:59:16 +0100 -- Deleting backups/tmp ...
2022-10-14 20:59:16 +0100 -- Deleting backups/tmp ... done
2022-10-14 20:59:16 +0100 -- Warning: Your gitlab.rb and gitlab-secrets.json files contain sensitive data
and are not included in this backup. You will need these files to restore a backup.
Please back them up manually.
2022-10-14 20:59:16 +0100 -- Backup 20221014-205835 is done.
tar: Removing leading `/' from member names
IonFS v0.3.0
Directory of ion://gitlab-local/gitlab-backups/
gitlab-backups/20221014-205835_gitlab_backup.tar 14/10/2022 20:59:30
gitlab-backups/20221014-205835_gitlab_config.tar 14/10/2022 20:59:34
gitlab-backups/gitlab-config.tar 14/10/2022 20:40:17
gitlab-backups/ionfs-example_gitlab_backup.tar 14/10/2022 20:40:12
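
To run the script automatically, it can be scheduled with cron. A minimal sketch, assuming the script is saved as /home/josh/gitlab-backup.sh and the backup user has passwordless sudo for the commands the script runs (cron can't answer a sudo password prompt):

# Run the GitLab backup to IBC S6 nightly at 02:30, logging output for review
30 2 * * * /home/josh/gitlab-backup.sh >> /home/josh/gitlab-backup.log 2>&1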

Wrapping up

In this tutorial we've covered some background on self-hosted GitLab backups, how to create them, and how to upload them to IBC S6 with the IonFS CLI. Finally, we wrapped the steps in a simple bash script to allow the process to be automated.

All the steps covered in this tutorial, and the backup script, are currently used by Ionburst Cloud to back up and protect our internal GitLab instance. To keep up with the latest developments on the backup script, please check out our examples repository on GitHub.

· 7 min read
Iain Sutherland

If you’re contemplating the option of adapting existing .NET code that uses Amazon S3 to instead use IBC S6 for storage, one question that might be at the forefront of your mind is, well just how difficult will that be?

Speaking as a fairly experienced .NET developer, I would reply: actually, very easy. And I'll demonstrate.

Prerequisites

To get started, you'll need appropriate credentials. For this tutorial, I have both AWS SDK credentials and Ionburst Cloud credentials set up on my PC. If you are already familiar with AWS credentials then you may already have a .aws/credentials file in your home directory. For this example, we'll be using credentials set up in the default profile.

The Ionburst Cloud SDKs can also use a credentials file in your home directory, .ionburst/credentials and, like AWS, can also use environment variables. The Ionburst Cloud .NET SDK documentation gives further detail on how the SDK can be configured.

The starting position

For this tutorial, we'll use a simple program that uploads a file called image.jpg to an S3 bucket, fetches the object from the bucket, storing it locally as image_download.jpg, before removing the object from the bucket.

Our simple program:

using System;
using System.IO;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

namespace SimpleUploader
{
    class Program
    {
        static async Task Main(string[] args)
        {
            StorageInterface storage = new(new AmazonS3Client());
            if (await storage.UploadFile("image.jpg"))
            {
                Console.WriteLine("Uploaded file");
                if (await storage.FetchFile("image.jpg"))
                {
                    Console.WriteLine("Fetched file");
                    if (await storage.RemoveFile("image.jpg"))
                    {
                        Console.WriteLine("Removed file");
                    }
                }
            }
        }
    }

    public class StorageInterface
    {
        private IAmazonS3 _storage;
        private const string BUCKET_NAME = "ibc-example816";

        public StorageInterface(IAmazonS3 storage)
        {
            _storage = storage;
        }

        public async Task<bool> UploadFile(string filename)
        {
            bool result = false;
            using (FileStream inputStream = new FileStream(filename, FileMode.Open))
            {
                PutObjectRequest putRequest = new()
                {
                    BucketName = BUCKET_NAME,
                    Key = filename,
                    ContentType = "application/octet-stream",
                    InputStream = inputStream
                };
                PutObjectResponse uploadResponse = await _storage.PutObjectAsync(putRequest);
                if (uploadResponse.HttpStatusCode == System.Net.HttpStatusCode.OK)
                {
                    result = true;
                }
            }
            return await Task.FromResult(result);
        }

        public async Task<bool> FetchFile(string filename)
        {
            bool result = false;
            GetObjectRequest getRequest = new()
            {
                BucketName = BUCKET_NAME,
                Key = filename
            };
            GetObjectResponse fetchResponse = await _storage.GetObjectAsync(getRequest);
            if (fetchResponse.HttpStatusCode == System.Net.HttpStatusCode.OK)
            {
                string outputFile = $"{Path.GetFileNameWithoutExtension(filename)}_download{Path.GetExtension(filename)}";
                using (var fileStream = File.Create(outputFile))
                {
                    fetchResponse.ResponseStream.CopyTo(fileStream);
                }
                result = true;
            }
            return await Task.FromResult(result);
        }

        public async Task<bool> RemoveFile(string filename)
        {
            bool result = false;
            DeleteObjectRequest deleteRequest = new()
            {
                BucketName = BUCKET_NAME,
                Key = filename
            };
            DeleteObjectResponse deleteResponse = await _storage.DeleteObjectAsync(deleteRequest);
            if (deleteResponse.HttpStatusCode == System.Net.HttpStatusCode.NoContent)
            {
                result = true;
            }
            return await Task.FromResult(result);
        }
    }
}

The conversion - Configuration

From here, it's simply a case of changing this code to use IBC S6 as the storage element. We'll do this by replacing the AWSSDK.S3 NuGet package with the Ionburst.SDK package.
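
Using the dotnet CLI, the package swap looks like the following (the same change can be made through the NuGet package manager in your IDE):

dotnet remove package AWSSDK.S3
dotnet add package Ionburst.SDK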

Once Ionburst.SDK has been added to the project, we can update the using statements, so:

using Amazon.S3;
using Amazon.S3.Model;

becomes

using Ionburst.SDK;
using Ionburst.SDK.Model;

Now we can change the type of our _storage variable and the constructor for our StorageInterface class:

public class StorageInterface
{
    private IAmazonS3 _storage;
    private const string BUCKET_NAME = "ibc-example";

    public StorageInterface(IAmazonS3 storage)
    {
        _storage = storage;
    }

becomes:

public class StorageInterface
{
    private IonburstClient _storage;
    private const string BUCKET_NAME = "ibc-example";

    public StorageInterface(IonburstClient storage)
    {
        _storage = storage;
    }

We then change the instantiation of our StorageInterface class from:

StorageInterface storage = new(new AmazonS3Client());

to

StorageInterface storage = new(new IonburstClient());

Finally, since we won’t be using the AWS credentials file, it is necessary to add an Ionburst section to the project appsettings.json file to define a profile that exists in the Ionburst credentials file:

{
  "Ionburst": {
    "Profile": "example"
  }
}

At this point, there are going to be some compilation errors to fix, and it will mostly be a case of renaming things. The Ionburst Cloud .NET SDK reference (/sdk/dotnet/) can provide the details of the names you need.

The rest - File operations

If you're familiar with the Amazon S3 .NET SDK, or indeed other Amazon .NET SDKs, then you know that they follow the same pattern: create and populate a request object, pass that request object as the argument to a function, and receive a response object from the function.

Ionburst.SDK follows the same pattern; it's just that some names and attributes are different.

The FetchFile and RemoveFile functions are easy to fix, and the fixes are very similar.

The request objects have the same names as their S3 counterparts, but there is no BucketName attribute in the request objects, and Key becomes Particle.

The functions are just GetAsync and DeleteAsync, and the response objects become GetObjectResult and DeleteObjectResult.

The StatusCode in the response object is also just an integer instead of a System.Net.HttpStatusCode type. The stream attribute in GetObjectResult is DataStream.

After all that, the functions look like this:

public async Task<bool> FetchFile(string filename)
{
    bool result = false;
    GetObjectRequest getRequest = new()
    {
        Particle = filename
    };
    GetObjectResult fetchResponse = await _storage.GetAsync(getRequest);
    if (fetchResponse.StatusCode == 200)
    {
        string outputFile = $"{Path.GetFileNameWithoutExtension(filename)}_download{Path.GetExtension(filename)}";
        using (var fileStream = File.Create(outputFile))
        {
            fetchResponse.DataStream.Seek(0, SeekOrigin.Begin);
            fetchResponse.DataStream.CopyTo(fileStream);
        }
        result = true;
    }
    return await Task.FromResult(result);
}

public async Task<bool> RemoveFile(string filename)
{
    bool result = false;
    DeleteObjectRequest deleteRequest = new()
    {
        Particle = filename
    };
    DeleteObjectResult deleteResponse = await _storage.DeleteAsync(deleteRequest);
    if (deleteResponse.StatusCode == 200)
    {
        result = true;
    }
    return await Task.FromResult(result);
}

The UploadFile function undergoes a similar change. The function call is just PutAsync, the response object is PutObjectResult, and the same attribute name changes apply to the request object. Minimally, the request object can be:

PutObjectRequest putRequest = new()
{
    Particle = filename,
    DataStream = inputStream
};

This will just use the default classification set up for the Ionburst Cloud region selected, resulting in a function that looks like:

public async Task<bool> UploadFile(string filename)
{
    bool result = false;
    using (FileStream inputStream = new FileStream(filename, FileMode.Open))
    {
        PutObjectRequest putRequest = new()
        {
            Particle = filename,
            DataStream = inputStream
        };
        PutObjectResult uploadResponse = await _storage.PutAsync(putRequest);
        if (uploadResponse.StatusCode == 200)
        {
            result = true;
        }
    }
    return await Task.FromResult(result);
}

From here, we are left with a forlornly unused line that can be removed, as Ionburst Cloud doesn't use the concept of a bucket:

private const string BUCKET_NAME = "ibc-example";

Wrapping up

In this tutorial, we've adapted a simple .NET application using Amazon S3 to instead use IBC S6 as its storage layer. The conversion itself was quick and easy, taking no more than a few minutes of coding time.

The full converted code can be found below:

using System;
using System.IO;
using System.Threading.Tasks;
using System.Linq;
using Ionburst.SDK;
using Ionburst.SDK.Model;

namespace SimpleUploader
{
    class Program
    {
        static async Task Main(string[] args)
        {
            StorageInterface storage = new(new IonburstClient());
            if (await storage.UploadFile("image.jpg"))
            {
                Console.WriteLine("Uploaded file");
                if (await storage.FetchFile("image.jpg"))
                {
                    Console.WriteLine("Fetched file");
                    if (await storage.RemoveFile("image.jpg"))
                    {
                        Console.WriteLine("Removed file");
                    }
                }
            }
        }
    }

    public class StorageInterface
    {
        private IonburstClient _storage;

        public StorageInterface(IonburstClient storage)
        {
            _storage = storage;
        }

        public async Task<bool> UploadFile(string filename)
        {
            bool result = false;
            using (FileStream inputStream = new FileStream(filename, FileMode.Open))
            {
                PutObjectRequest putRequest = new()
                {
                    Particle = filename,
                    DataStream = inputStream
                };
                PutObjectResult uploadResponse = await _storage.PutAsync(putRequest);
                if (uploadResponse.StatusCode == 200)
                {
                    result = true;
                }
            }
            return await Task.FromResult(result);
        }

        public async Task<bool> FetchFile(string filename)
        {
            bool result = false;
            GetObjectRequest getRequest = new()
            {
                Particle = filename
            };
            GetObjectResult fetchResponse = await _storage.GetAsync(getRequest);
            if (fetchResponse.StatusCode == 200)
            {
                string outputFile = $"{Path.GetFileNameWithoutExtension(filename)}_download{Path.GetExtension(filename)}";
                using (var fileStream = File.Create(outputFile))
                {
                    fetchResponse.DataStream.Seek(0, SeekOrigin.Begin);
                    fetchResponse.DataStream.CopyTo(fileStream);
                }
                result = true;
            }
            return await Task.FromResult(result);
        }

        public async Task<bool> RemoveFile(string filename)
        {
            bool result = false;
            DeleteObjectRequest deleteRequest = new()
            {
                Particle = filename
            };
            DeleteObjectResult deleteResponse = await _storage.DeleteAsync(deleteRequest);
            if (deleteResponse.StatusCode == 200)
            {
                result = true;
            }
            return await Task.FromResult(result);
        }
    }
}

· 3 min read
Josh Fraser

Overview

An IBC S6 API transaction currently supports a maximum upload size of 50MB. To upload objects larger than this limit, objects have to be split into 50MB chunks and uploaded individually, placing an implementation and management overhead on developers and client tools to track and manage these chunks.

To make it easier for developers and client tools to upload large objects to IBC S6, we've developed what we call IBC S6 SDK manifests. The SDK manifest comprises two main parts:

  • An implementation within each Ionburst Cloud SDK to chunk large objects and manage the upload of each chunk to IBC S6
  • A metadata object that tracks important information about each chunk: their IDs, checksums, ordinality, etc.

To keep the manifest implementation simple, and to ensure the amount of information held by IBC S6 about a given object is minimised, this manifest metadata object is also stored within IBC S6 using the external reference (object ID) of the request.

This allows the chunks comprising a large object to be tracked, while avoiding any overhead on the client to track and manage them. It also means that to retrieve a large object, the external reference can be passed to the SDK manifest function, which will in turn retrieve and reconstruct the chunks.

The SDK manifests feature is similar to the multipart upload concept used by object stores like Amazon S3.
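
To make this concrete, the manifest metadata object can be pictured as something like the sketch below. This is purely illustrative: the actual format is internal to the Ionburst Cloud SDKs, and the field names here are assumptions based on the description above (the chunk IDs are borrowed from the ioncli example later in this post).

{
  "name": "manifest-example",
  "chunkCount": 3,
  "chunkSize": 50000000,
  "chunks": [
    { "id": "fed44c96-f457-4e98-829c-b6809ec26e42", "ord": 1, "hash": "<sha256 checksum>" },
    { "id": "ac45ec73-ac16-4eaf-8251-79b2e1561a56", "ord": 2, "hash": "<sha256 checksum>" },
    { "id": "a41336ee-de91-474f-ad18-f5c36fe28060", "ord": 3, "hash": "<sha256 checksum>" }
  ]
}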

Getting Started

In this tutorial we will provide examples and code snippets of how to use the new manifest feature to upload large objects to IBC S6:

  1. Uploading large objects with ioncli
  2. Uploading large objects with the Ionburst Cloud Go SDK

ioncli

In this example, we will upload our large object, my-large-file.png, to IBC S6 using the ioncli mput command.

Uploading my-large-file.png with ioncli:

ioncli --profile ioncli-example mput manifest-example my-large-file.png

Example output:

[hello@ioncli-example ~]$ ls -lah my-large-file.png
-rw-rw-r--. 1 hello hello 125M 13 Sep 09:39 my-large-file.png
[hello@ioncli-example ~]$ ioncli --profile ioncli-example mput manifest-example my-large-file.png
Split to: fed44c96-f457-4e98-829c-b6809ec26e42
Split to: ac45ec73-ac16-4eaf-8251-79b2e1561a56
Split to: a41336ee-de91-474f-ad18-f5c36fe28060

Go SDK

The following example program shows how the Ionburst Cloud Go SDK PutManifest method can be used:

package main

import (
    "fmt"
    "os"

    "gitlab.com/ionburst/ionburst-sdk-go"
)

func main() {
    client, err := ionburst.NewClient()
    if err != nil {
        fmt.Println(err)
    }
    ioReader, _ := os.Open("my-large-file.png")
    err = client.PutManifest("manifest-example", ioReader, "")
    if err != nil {
        fmt.Println(err)
    }
    err = client.Head("manifest-example")
    if err != nil {
        fmt.Println(err)
    } else {
        fmt.Printf("Checked: %s\n", "manifest-example")
    }
    size, err := client.HeadWithLen("manifest-example")
    if err != nil {
        fmt.Println(err)
    } else {
        fmt.Printf("Size: %d\n", size)
    }
}

Example output:

[hello@example head]$ go run main.go
Split to: 45202923-bb8f-4775-9d06-f7d69c50e883
Split to: de109e03-de07-46d5-a900-76bcaf612d81
Split to: 97301ce9-8645-431b-b9a9-d2f6bab7c2a9
Checked: manifest-example
Size: 498
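
Retrieval works in reverse: pass the same external reference and the SDK fetches the manifest, then downloads and reassembles the chunks. Below is a minimal sketch; the GetManifest name and signature are assumptions mirroring PutManifest, so check the Go SDK reference for the actual method.

package main

import (
    "fmt"
    "io"
    "os"

    "gitlab.com/ionburst/ionburst-sdk-go"
)

func main() {
    client, err := ionburst.NewClient()
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }
    // Assumed counterpart to PutManifest: returns a reader over the
    // reassembled object identified by the external reference.
    reader, err := client.GetManifest("manifest-example")
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }
    out, err := os.Create("my-large-file-download.png")
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }
    defer out.Close()
    if _, err := io.Copy(out, reader); err != nil {
        fmt.Println(err)
    }
}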

· 4 min read
Josh Fraser

Overview

As a secure object storage service, IBC S6 was designed to be integrated into both new and existing applications. Firefox Send is an open-source, secure file-sharing service that integrates cloud object stores like Amazon S3 and Google Cloud Storage for its backend storage.

We've integrated IBC S6 with Send to provide Ionburst Cloud customers the ability to easily and safely share files, while also providing an integration example for our Node.js SDK.

Shared Responsibility Model Breakdown

Customer Responsibility

  • You, the customer, are responsible for the secure management of the Ionburst Cloud credentials used by Send to connect with IBC S6.
  • You, the customer, are responsible for the security and administration of the infrastructure running the Send service.

Ionburst Cloud Responsibility

  • We are responsible for the security of all files stored in IBC S6 through the Send integration.
  • We are responsible for the underlying security and availability of the Ionburst Cloud platform.

Getting Started

In this tutorial we will cover:

  1. Setting up Send with IBC S6.
  2. Working with Send and IBC S6 in development mode.
  3. Preparing Send for production.

1. Setting up Send with IBC S6

First, we need to clone the Ionburst Cloud Send project.

We then need to install the Node.js dependencies:

git clone https://github.com/ionburstcloud/send.git
cd send
npm install

Once the dependencies are installed, open the Send project in your preferred IDE or editor.

2. Running Send with IBC S6 in development mode

To try out Send with IBC S6 in the Node.js development mode, we need to complete the following configuration steps:

  • Edit the Ionburst SDK config.json file, adding our credentials profile and Ionburst Cloud API endpoint
  • Configure the Send backend to use IBC S6 as its backend storage
  • Start a Redis container to handle Send metadata (optional)

The config.json file can be found at the root of the Send project:

{
  "Ionburst": {
    "Profile": "send",
    "IonburstUri": "https://api.eu-west-1.ionburst.cloud/",
    "TraceCredentialsFile": "OFF"
  }
}

If you have an existing Ionburst credentials profile, it can be added to the Profile key. Otherwise, add a new profile to the Ionburst credentials file with the name send.
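
For reference, a minimal Ionburst credentials file (stored at ~/.ionburst/credentials) with a send profile would look like the following, with your own Ionburst Cloud API credentials substituted:

[send]
ionburst_id=your-ionburst-id
ionburst_key=your-ionburst-key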

IBC S6 can now be enabled as Send's backend storage by editing the config.js file, found in the server directory. Update the ionburst property's default value to 'true':

ionburst: {
  format: String,
  default: 'true'
},

If you're looking to use Redis for metadata while in development mode, you can also update the redis_host property found on line 91 from localhost to 127.0.0.1 (or the address of your Redis host). This step is optional, as Send will use its own memory store for metadata if the property is left as localhost.

To start a Redis container, the following can be used:

docker run --name send-redis -p 127.0.0.1:6379:6379/tcp -tid redis:latest

Once configured, the Send project can be launched, and will be available at http://localhost:8080:

npm start

3. Preparing Send for production

Preparing Send for production is a matter of reviewing and updating the config.js file, found in the server directory. Configuration items to consider are listed below; each can also be overridden with an environment variable, as sketched after the list:

  • redis_host on line 91 - it's recommended to use Redis when running Send in production, rather than the built-in memory store. Update this to the address of your Redis node.
  • listen_port on line 136 - Send will run on port 1443 by default in production. It's recommended to run Send behind a load balancer or reverse proxy.
  • base_url on line 167 - change this to the URL your Send instance will be available on.
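
As a sketch of the environment-variable approach, assuming Send's usual convict bindings (REDIS_HOST, PORT and BASE_URL; check config.js for the exact names declared against each property):

REDIS_HOST=127.0.0.1 \
PORT=1443 \
BASE_URL=https://send.example.com \
npm run prod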

Once happy with the config, a production build of Send can be created:

npm run build

If running Redis on the same host, you can start the container with:

docker run --name send-redis -p 127.0.0.1:6379:6379/tcp -tid redis:latest

You can now start the production build of Send with:

npm run prod

The production build of Send will now be available at http://localhost:1443

Conclusion

We've covered how to get the Send project up and running with IBC S6, both in development mode, and to prepare it for production deployment. If you have further questions on deploying Send with IBC S6, or integrating Ionburst SDKs with new or existing applications, drop us a line in our community Slack.

· 8 min read
Josh Fraser

Overview

The IonFS CLI provides a set of tools to manage objects and files stored in IBC S6 as if it were a remote filesystem. While the data is stored within IBC S6, the metadata is stored in a customer-controlled metadata repository.

Anyone that has been granted access to this repository, and the appropriate Ionburst Cloud Platform credentials, can interact with the stored files or objects.

To get up and running quickly, we will be using the newly released IonFS CLI local metadata repository functionality.

Shared Responsibility Model Breakdown

Customer Responsibility

  • You, the customer, are responsible for the secure management of the Ionburst Cloud credentials used by ionfs.
  • You, the customer, are responsible for the security of ionfs metadata repositories and the metadata stored in them.

Ionburst Cloud Responsibility

  • We are responsible for the security of all data stored in IBC S6 using ionfs.
  • We are responsible for the underlying security and availability of the Ionburst Cloud platform.

Getting Started

In this tutorial we will cover:

  1. Setting up ionfs.
  2. Working with ionfs metadata repositories.
  3. Listing IBC classifications with ionfs.
  4. Working with ionfs directories.
  5. Managing files stored on IBC S6 with ionfs.

Basic Usage

ionfs allows us to do the following:

  • List configured metadata repositories.
  • List available IBC classifications.
  • Create, list and delete ionfs directories.
  • Upload, download and delete data from IBC S6.

1. Setting up ionfs

ionfs makes use of metadata repositories, or repos, to track the objects and files that have been secured by IBC S6. Metadata repos are specified in the configuration file stored under ~/.ionfs/appsettings.json.

For this tutorial, we are going to create a new local directory to use for ionfs metadata, along with the ~/.ionfs directory used to store our configuration file.

mkdir ~/local-ionfs
mkdir ~/.ionfs

We can now set up our ionfs configuration file. First, add a new file to our newly created .ionfs directory.

For macOS and Linux users:

touch ~/.ionfs/appsettings.json

For Windows users:

New-Item ~/.ionfs/appsettings.json -type file

Open this file in your text editor of choice, and add the following:

{
  "IonFS": {
    "MaxSize": "50000000",
    "Verbose": "false",
    "DefaultClassification": "Restricted",
    "Repositories": [
      {
        "Name": "local-ionfs",
        "Usage": "Data",
        "Class": "Ionburst.Apps.IonFS.Repo.LocalFS.MetadataLocalFS",
        "Assembly": "Ionburst.Apps.IonFS.Repo.LocalFS",
        "DataStore": "/Users/username/local-ionfs"
      }
    ],
    "DefaultRepository": "local-ionfs"
  },
  "Ionburst": {
    "Profile": "example",
    "TraceCredentialsFile": "OFF"
  }
}

Key points to note:

  • the DataStore entry references the local directory we've created for metadata (remember to change the username), but it cannot use relative paths, e.g.:
    • for macOS: /Users/username/local-ionfs
    • for Linux: /home/username/local-ionfs
    • for Windows: /
  • the Ionburst section relates to the Ionburst SDK credentials file. If you have an existing profile, you can add it here.

If you do not have an existing Ionburst credentials file, one can be created with the following:

For macOS and Linux users:

mkdir ~/.ionburst
touch ~/.ionburst/credentials

For Windows users:

mkdir ~/.ionburst
New-Item ~/.ionburst/credentials -type file

Open this file in your text editor of choice, and add the following (remember to add your Ionburst Cloud API credentials here):

[example]
ionburst_id=your-ionburst-id
ionburst_key=your-ionburst-key

2. Metadata Repos

Now that we have ionfs set up, we can start working with our metadata repo. To list the configured repos, the following ionfs command can be used:

ionfs repos

An example output would look like:

[hello@ionfs-example ~]$ ionfs repos
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Available Repositories (*default):
* [d] ion://local-ionfs/ (Ionburst.Apps.IonFS.Repo.LocalFS.MetadataLocalFS)

3. Classifications

Data can be secured by Ionburst Cloud according to available security policies. ionfs can be used to view the policies currently available to an Ionburst Cloud party.

To list available policies, the following can be used:

ionfs policy

An example output would look like:

[hello@ionfs-example ~]$ ionfs policy
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Available Classifications:
2:Restricted

4. Directories

Files and objects secured by IBC S6 through ionfs can be organised within its repo using a typical directory structure.

List directories

To list available directories within a repo, the following can be used:

ionfs list ion://local-ionfs

As we marked the local-ionfs repo as the default, we can also omit the repo name; the default repo will then be treated as the root:

ionfs list ion://

An example output would look like:

[hello@ionfs-example ~]$ ionfs list
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://local-ionfs/
d example/

By default, this will list the contents of the repo's root directory. To list a specific directory, the following can be used:

ionfs list ion://example

An example output would look like:

[hello@ionfs-example ~]$ ionfs list ion://example
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://local-ionfs/example/
Remote directory is empty

Create a directory

To create a new directory within a repo, the following can be used:

ionfs mkdir ion://new-directory

An example output would look like:

[hello@ionfs-example ~]$ ionfs mkdir ion://new-directory
[hello@ionfs-example ~]$ ionfs list
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://local-ionfs/
d example/
d new-directory/

Delete a directory

To remove a directory within a repo, the following can be used:

ionfs rmdir ion://new-directory

An example output would look like:

[hello@ionfs-example ~]$ ionfs rmdir ion://new-directory
[hello@ionfs-example ~]$ ionfs list
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://local-ionfs/
d example/

5. Files

Finally, and most importantly, we can now look at uploading (Put), downloading (Get) and deleting data from IBC S6 using ionfs. In these examples, we'll use a file called my-file.txt.

First, we need to create my-file.txt:

echo "We may guard your data, but we'll never take its freedom" > my-file.txt

Put

To upload a file to Ionburst Cloud with ionfs, the following can be used:

ionfs put my-file.txt ion://

An example output would look like:

[hello@ionfs-example ~]$ ionfs put my-file.txt ion://
[hello@ionfs-example ~]$ ionfs list
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://local-ionfs/
d example/
my-file.txt 23/08/2022 13:49:51

To upload data to a specific directory within your repo, use the following:

ionfs put my-file.txt ion://example

An example output would look like:

[hello@ionfs-example ~]$ ionfs put my-file.txt ion://example
[hello@ionfs-example ~]$ ionfs list ion://example
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://local-ionfs/example/
example/my-file.txt 23/08/2022 13:50:23

Get

To retrieve a file with ionfs, use the following:

ionfs get ion://example/my-file.txt

An example output would look like:

[hello@ionfs-example ~]$ rm my-file.txt
[hello@ionfs-example ~]$ ionfs get ion://example/my-file.txt
[hello@ionfs-example ~]$ ls
my-file.txt
[hello@ionfs-example ~]$ cat my-file.txt
We may guard your data, but we'll never take its freedom

By default, this will download the file from IBC S6 to the current directory, with the name used in ionfs. To download to a specific local directory, or to download to a different filename, use the following:

ionfs get ion://example/my-file.txt my-file-2.txt

An example output would look like:

[hello@ionfs-example ~]$ ionfs get ion://example/my-file.txt my-file-2.txt
[hello@ionfs-example ~]$ ls
my-file.txt my-file-2.txt
[hello@ionfs-example ~]$ cat my-file-2.txt
We may guard your data, but we'll never take its freedom

Delete

To delete a file from the ionfs repo and from IBC S6, the following can be used:

ionfs del ion://example/my-file.txt

An example output would look like:

[hello@ionfs-example ~]$ ionfs del ion://example/my-file.txt
[hello@ionfs-example ~]$ ionfs list ion://example
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://local-ionfs/example/
Remote directory is empty

Conclusion

You should now be able to perform basic file operations on IBC S6 with ionfs. If you're interested in learning more about the IonFS CLI, please see the Ionburst Cloud docs.

· 3 min read
Josh Fraser

Overview

The Ionburst Cloud API HEAD method has been added to allow IBC S6 objects and IBC NKV secrets to be verified after upload, or queried for basic information such as their size.

A HEAD request is functionally similar to a GET request; it is authenticated and requires the external reference of the object or secret to be checked. Instead of returning the specified object or secret, the HEAD request returns a status code and a response header with the size of the stored object or secret.

For full details of the HEAD method, please see the API docs for IBC S6, and IBC NKV.
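
To illustrate the shape of the request itself, the following curl sketch issues a HEAD request (-I) against an IBC S6 object. Note the assumptions: the api/data/{id} path is inferred from ioncli's API error output, and authentication is omitted entirely, so a real request would also need valid Ionburst Cloud credentials:

curl -I https://api.eu-west-1.ionburst.cloud/api/data/head-example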

Getting Started

In this tutorial we will provide examples and code snippets of how to use the new HEAD method:

  1. Using the HEAD method with ioncli
  2. Using the HEAD method with the Ionburst Cloud Go SDK

ioncli

In this example, we will upload a file, my-file.txt, to IBC S6 using ioncli, then verify its size with the ioncli head command.

Creating my-file.txt:

echo "We may guard your data, but we'll never take its freedom" > my-file.txt

Uploading my-file.txt with ioncli:

ioncli --profile ioncli-example put head-example my-file.txt

Checking my-file.txt with ioncli:

ioncli --profile ioncli-example head head-example

Example output:

[hello@ioncli-example ~]$ echo "We may guard your data, but we'll never take its freedom" > my-file.txt
[hello@ioncli-example ~]$ ls -lah my-file.txt
-rw-rw-r--. 1 hello hello 57B Sep 04 13:37 my-file.txt
[hello@ioncli-example ~]$ ioncli --profile ioncli-example head head-example
Size: 57

Go SDK

The following example program shows how the Ionburst Cloud Go SDK Head and HeadWithLen methods can be used:

package main

import (
    "fmt"
    "os"

    "gitlab.com/ionburst/ionburst-sdk-go"
)

func main() {
    // Create an Ionburst Cloud client from the default profile and config
    client, err := ionburst.NewClient()
    if err != nil {
        fmt.Println(err)
        return
    }

    // Open and upload the file to be checked
    ioReader, err := os.Open("my-file.txt")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer ioReader.Close()

    err = client.Put("head-example", ioReader, "")
    if err != nil {
        fmt.Println(err)
    }

    // Head verifies the object exists without downloading it
    err = client.Head("head-example")
    if err != nil {
        fmt.Println(err)
    } else {
        fmt.Printf("Checked: %s\n", "head-example")
    }

    // HeadWithLen also returns the size of the stored object
    size, err := client.HeadWithLen("head-example")
    if err != nil {
        fmt.Println(err)
    } else {
        fmt.Printf("Size: %d\n", size)
    }
}

Example output:

[hello@example head]$ go run main.go
Checked: head-example
Size: 57

· 7 min read
Josh Fraser

Overview

The IonFS Command Line Interface provides a set of tools to manage data stored by Ionburst Cloud S6 as if it were a remote filesystem. While the IonFS CLI stores files within Ionburst Cloud S6, the metadata is stored in a customer-owned metadata repository.

Anyone that has been granted access to this repository, and the appropriate Ionburst Cloud credentials, can interact with the stored data.

For this tutorial, we will be using Amazon S3 as the ionfs metadata repository.

Shared Responsibility Model Breakdown

Customer Responsibility

  • You, the customer, are responsible for the secure management of the Ionburst Cloud credentials used by ionfs.
  • You, the customer, are responsible for the security of ionfs metadata repositories and the metadata stored in them.

Ionburst Cloud Responsibility

  • We are responsible for the security of all data stored in Ionburst Cloud S6 using ionfs.
  • We are responsible for the underlying security and availability of the Ionburst Cloud platform.

Getting Started

In this tutorial we will cover:

  1. Working with ionfs metadata repositories.
  2. Listing IBC classifications with ionfs.
  3. Working with ionfs directories.
  4. Managing files with ionfs.

Basic Usage

ionfs allows us to do the following:

  • List configured metadata repositories.
  • List available IBC classifications.
  • Create, list and delete ionfs directories.
  • Upload, download and delete data from IBC.

1. Metadata Repositories

ionfs makes use of metadata repositories, or repos, to track data that has been secured by Ionburst Cloud S6. Metadata repos are specified in the configuration file stored under ~/.ionfs/appsettings.json.

To list the configured repos, the following ionfs command can be used:

ionfs repos

An example output would look like:

[hello@ionfs-example ~]$ ionfs repos
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Available Repositories (*default):
* [d] ion://s3-example-ionfs/ (Ionburst.Apps.IonFS.Repo.S3.MetadataS3)

2. Classifications

Data can be secured by Ionburst Cloud according to available security policies. ionfs can be used to view the policies currently available to an Ionburst Cloud party.

To list available policies, the following can be used:

ionfs policy

An example output would look like:

[hello@ionfs-example ~]$ ionfs policy
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Available Classifications:
2:Restricted

3. Directories

Data secured by Ionburst Cloud S6 through ionfs can be organised within its repo using a typical directory structure.

List directories

To list available directories within a repo, the following can be used:

ionfs list

An example output would look like:

[hello@ionfs-example ~]$ ionfs list
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://s3-example-ionfs/
d example/

By default, this will list the contents of the repo's root directory. To list a specific directory, the following can be used:

ionfs list ion://example

An example output would look like:

[hello@ionfs-example ~]$ ionfs list ion://example
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://s3-example-ionfs/example/
Remote directory is empty

Create a directory

To create a new directory within a repo, the following can be used:

ionfs mkdir ion://new-directory

An example output would look like:

[hello@ionfs-example ~]$ ionfs mkdir ion://new-directory
[hello@ionfs-example ~]$ ionfs list
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://s3-example-ionfs/
d example/
d new-directory/

Delete a directory

To remove a directory within a repo, the following can be used:

ionfs rmdir ion://new-directory

An example output would look like:

[hello@ionfs-example ~]$ ionfs rmdir ion://new-directory
[hello@ionfs-example ~]$ ionfs list
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://s3-example-ionfs/
d example/

4. Files

Finally, and most importantly, we can now look at uploading (Put), downloading (Get) and deleting data from IBC S6 using ionfs. In these examples, we'll use a file called my-file.txt.

First, we need to create my-file.txt:

echo "We may guard your data, but we'll never take its freedom" > my-file.txt

Put

To upload a file to Ionburst Cloud with ionfs, the following can be used:

ionfs put my-file.txt ion://

An example output would look like:

[hello@ionfs-example ~]$ ionfs put my-file.txt ion://
[hello@ionfs-example ~]$ ionfs list
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://s3-example-ionfs/
d example/
my-file.txt 23/4/2021 13:49:51

To upload data to a specific directory within your repo, use the following:

ionfs put my-file.txt ion://example

An example output would look like:

[hello@ionfs-example ~]$ ionfs put my-file.txt ion://example
[hello@ionfs-example ~]$ ionfs list ion://example
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://s3-example-ionfs/example/
example/my-file.txt 23/4/2021 13:50:23

Get

To retrieve a file with ionfs, use the following:

ionfs get ion://example/my-file.txt

An example output would look like:

[hello@ionfs-example ~]$ rm my-file.txt
[hello@ionfs-example ~]$ ionfs get ion://example/my-file.txt
[hello@ionfs-example ~]$ ls
my-file.txt
[hello@ionfs-example ~]$ cat my-file.txt
We may guard your data, but we'll never take its freedom

By default, this will download the file from Ionburst Cloud S6 to the current directory, with the name used in ionfs. To download to a specific local directory, or to download to a different name, use the following:

ionfs get -n my-file-2.txt ion://example/my-file.txt

An example output would look like:

[hello@ionfs-example ~]$ ionfs get -n my-file-2.txt ion://example/my-file.txt
[hello@ionfs-example ~]$ ls
my-file.txt my-file-2.txt
[hello@ionfs-example ~]$ cat my-file-2.txt
We may guard your data, but we'll never take its freedom

Delete

To delete a file from the ionfs repo and from Ionburst Cloud S6, the following can be used:

ionfs del ion://example/my-file.txt

An example output would look like:

[hello@ionfs-example ~]$ ionfs del ion://example/my-file.txt
[hello@ionfs-example ~]$ ionfs list ion://example
____ ___________
/ _/___ ____ / ____/ ___/
/ // __ \/ __ \/ /_ \__ \
_/ // /_/ / / / / __/ ___/ /
/___/\____/_/ /_/_/ /____/
Directory of ion://s3-example-ionfs/example/
Remote directory is empty

Conclusion

You should now be able to perform basic file operations on Ionburst Cloud S6 using the ionfs tool. If you're interested in learning more about the IonFS CLI, please see the Ionburst Cloud docs.

· 4 min read
Josh Fraser

Overview

ioncli is a simple Command Line Interface tool that allows data to be uploaded to, downloaded from, and deleted from Ionburst Cloud S6. ioncli also allows the listing of available classifications.

The aim of this tutorial is to learn how to set up ioncli, and use it to perform basic operations against the Ionburst Cloud S6 API.
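
As a sketch, assuming ioncli resolves profiles from the same shared Ionburst credentials file used by the Ionburst Cloud SDKs (~/.ionburst/credentials), a profile matching the --profile ioncli-example flag used throughout this tutorial would look like the following, with your own API credentials substituted:

[ioncli-example]
ionburst_id=your-ionburst-id
ionburst_key=your-ionburst-key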

Shared Responsibility Model Breakdown

Customer Responsibility

  • You, the customer, are responsible for the secure management of the Ionburst Cloud credentials used by the ioncli tool.
  • ioncli does not provide any client-side encryption functionality. If client-side encryption is used in conjunction with ioncli, it is the customer's responsibility to manage.
  • ioncli does not provide any metadata tracking or management for data stored. It is the customer's responsibility to track and record this information.

Ionburst Cloud Responsibility

  • We are responsible for the security of all data stored in Ionburst Cloud S6 using the ioncli tool.
  • We are responsible for the underlying security and availability of the Ionburst Cloud services.

Getting Started

In this tutorial we will cover:

  1. Retrieving the available Ionburst Cloud classifications.
  2. Uploading a file to Ionburst Cloud S6.
  3. Downloading a file from Ionburst Cloud S6.
  4. Deleting a file from Ionburst Cloud S6.

Basic Usage

ioncli provides functionality for the following:

  • Classifications listing.
  • PUT - Uploading data to Ionburst Cloud S6.
  • GET - Downloading data from Ionburst Cloud S6.
  • DELETE - Deleting data from Ionburst Cloud S6.

1. Classifications

Data can be secured by Ionburst Cloud S6 according to available security classifications. ioncli can be used to view the policies currently available to an Ionburst Cloud party.

To list available policies, the following can be used:

ioncli --profile ioncli-example classifications list

The output for this command should look like:

[hello@ioncli-example ~]$ ioncli --profile ioncli-example classifications list
Classifications: 1
Classification
-----------------
Restricted

2. Uploading Data

To upload data to Ionburst Cloud S6 using ioncli, an object ID and file must be supplied. In this example we will upload the file my-file.txt as ID my-ioncli-put.

Creating my-file.txt:

echo "We may guard your data, but we'll never take its freedom" > my-file.txt

Uploading my-file.txt with ioncli:

ioncli --profile ioncli-example put my-ioncli-put my-file.txt

This operation does not return any output on success.

3. Downloading Data

To retrieve data from Ionburst Cloud S6 using ioncli, an object ID and output path must be provided. In this example, we will download the previously uploaded object, my-ioncli-put, to the path/file my-downloaded-file.txt.

Downloading my-ioncli-put:

ioncli --profile ioncli-example get my-ioncli-put my-downloaded-file.txt

This operation does not return any output on success.

We can now view the downloaded file:

cat my-downloaded-file.txt

The output for this command should look like:

[hello@ioncli-example ~]$ cat my-downloaded-file.txt
We may guard your data, but we'll never take its freedom

4. Deleting Data

To delete data from Ionburst Cloud S6 using ioncli, an object ID must be provided. In this example, we will delete the previously uploaded object, my-ioncli-put.

Deleting my-ioncli-put:

ioncli --profile ioncli-example delete my-ioncli-put

This operation does not return any output on success.

We can verify the object has been deleted by attempting to download my-ioncli-put:

ioncli --profile ioncli-example get my-ioncli-put my-downloaded-file.txt

The output for this command should look like:

[hello@ioncli-example ~]$ ioncli --profile ioncli-example get my-ioncli-put my-downloaded-file.txt
2021/06/27 18:59:22 Error performing Ionburst API Operation: [GET] https://api.eu-west-1.ionburst.cloud/api/data/my-ioncli-put :: 404 ->

Conclusion

You should now be able to perform basic data operations on Ionburst Cloud S6 using the ioncli tool.

If you're interested in learning more about our more fully-featured command-line tool, IonFS, which features object metadata management and filesystem-like interactions, please see the Ionburst Cloud docs.