Storage SDK
This is the documentation for @spheron/storage SDK versions >=2.0.0.
The @spheron/storage SDK >=2.0.0 requires a token for a Storage organization, and all usage is bound to the provided organization.
If you have been using SDK versions <2.0.0 with a Static Site organization and wish to migrate your existing buckets to a new Storage organization to take advantage of the latest features and improvements, we have provided a convenient method for this purpose.
- To migrate the buckets of your Static Site organization to your new Storage organization, refer to the migrateStaticSiteOrgToStorage method in this documentation. By following the migration process, you can seamlessly transition to the latest version of the SDK and leverage its enhanced features while keeping your existing data.
- The changes between <2.0.0 and >=2.0.0 are described in a section at the end of this page.
The Spheron Storage SDK is equipped with multi-chain storage capabilities powered by Spheron.
The package is only meant for Node.js environments and will not work in browsers or frontend apps.
Installation
Using NPM
npm i @spheron/storage
Using Yarn
yarn add @spheron/storage
Usage
To use the Spheron Storage SDK, you have to create an instance of SpheronClient.
import { SpheronClient, ProtocolEnum } from "@spheron/storage";
const client = new SpheronClient({ token });
The SpheronClient constructor takes an object that has one property, token.
Follow the instructions in the DOCS to generate this token.
NOTE: When you are creating the token, please choose the Storage type in the dashboard.
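For example, a minimal sketch that reads the token from an environment variable before creating the client (the variable name SPHERON_TOKEN is just an illustrative choice, not something the SDK requires):

import { SpheronClient } from "@spheron/storage";

// Read the Storage organization token from the environment (illustrative variable name).
const token = process.env.SPHERON_TOKEN;
if (!token) {
  throw new Error("Missing SPHERON_TOKEN environment variable");
}

const client = new SpheronClient({ token });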
The SpheronClient instance provides several methods for working with buckets. Here's the list of all the supported methods.
Upload
Upload
Used to upload a file/directory to the specified protocol.
let currentlyUploaded = 0;
const { uploadId, bucketId, protocolLink, dynamicLinks, cid } =
  await client.upload(filePath, {
    protocol: ProtocolEnum.IPFS,
    name,
    onUploadInitiated: (uploadId) => {
      console.log(`Upload with id ${uploadId} started...`);
    },
    onChunkUploaded: (uploadedSize, totalSize) => {
      currentlyUploaded += uploadedSize;
      console.log(`Uploaded ${currentlyUploaded} of ${totalSize} Bytes.`);
    },
  });
Function upload takes two parameters:
- filePath - (string) - The path to the file/directory that will be uploaded.
- configuration - An object with the following parameters:
  - configuration.name - Represents the name of the bucket to which you are uploading the data.
  - configuration.protocol - The protocol on which the data will be uploaded. The supported protocols are [ARWEAVE, IPFS, FILECOIN].
  - configuration.onUploadInitiated - (Optional) - Callback function (uploadId: string) => void. The function will be called once, when the upload is initiated, right before the data is uploaded. It will be called with one parameter, uploadId, which represents the id of the started upload.
  - configuration.onChunkUploaded - (Optional) - Callback function (uploadedSize: number, totalSize: number) => void. The function will be called each time a chunk is uploaded, multiple times depending on the upload size, with two parameters. The first, uploadedSize, represents the size in bytes of the uploaded chunk. The second, totalSize, represents the total size of the upload in bytes.
Response
interface UploadRes {
uploadId: string;
bucketId: string;
protocolLink: string;
dynamicLinks: Domain[];
cid: string;
}
NOTE: The cid property will only have a value for IPFS and Filecoin uploads.
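As a small usage sketch, the snippet below uploads a directory and logs the values from the response; the ./build path and the bucket name are placeholders, not values expected by the SDK:

import { SpheronClient, ProtocolEnum } from "@spheron/storage";

async function uploadBuild(client: SpheronClient): Promise<void> {
  const { uploadId, bucketId, protocolLink, cid } = await client.upload(
    "./build", // placeholder path to the directory being uploaded
    {
      protocol: ProtocolEnum.IPFS,
      name: "my-example-bucket", // placeholder bucket name
    }
  );

  console.log(`Upload ${uploadId} was stored in bucket ${bucketId}`);
  console.log(`Protocol link: ${protocolLink}`);
  console.log(`CID (IPFS and Filecoin uploads only): ${cid}`);
}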
Encrypt Upload
Used to encrypt a string or a file and upload it to IPFS.
NOTE: Both the encryptUpload and decryptUpload functions require an instance of LitNodeClient that is already connected. To create an instance, check out the @lit-protocol/lit-node-client package or our example.
const { uploadId, bucketId, protocolLink, dynamicLinks } =
  await spheron.encryptUpload({
    authSig,
    accessControlConditions,
    chain,
    filePath,
    litNodeClient,
    configuration: {
      name: bucketName,
    },
  });
Parameters of encryptUpload:
- authSig - (optional) - The authSig of the user. You can use siwe to create it. Check out the example.
- sessionSigs - (optional) - The session signatures to use to authorize the user with the nodes.
- accessControlConditions - (optional) - The access control conditions that the user must meet to obtain this signed token. This could be possession of an NFT, for example.
- evmContractConditions - (optional) - EVM smart contract access control conditions that the user must meet to obtain this signed token. This could be possession of an NFT, for example. This is different from accessControlConditions because accessControlConditions only supports a limited number of contract calls, while evmContractConditions supports any contract call.
- solRpcConditions - (optional) - Solana RPC call conditions that the user must meet to obtain this signed token. This could be possession of an NFT, for example.
- unifiedAccessControlConditions - (optional) - An array of unified access control conditions. You may use AccessControlCondition, EVMContractCondition, or SolRpcCondition objects in this array, but make sure you add a conditionType for each one.
NOTE: The function requires either accessControlConditions, evmContractConditions, solRpcConditions or unifiedAccessControlConditions.
- chain - The name of the chain that this contract is deployed on.
- string - (optional) - The string you wish to encrypt.
- filePath - (optional) - The path to the file you wish to encrypt.
NOTE: The function requires either the string or the filePath parameter to be passed.
- litNodeClient - An instance of LitNodeClient that is already connected.
- configuration - An object with the following parameters:
  - name - Represents the name of the bucket to which you are uploading the data.
  - onUploadInitiated - (optional) - Callback function (uploadId: string) => void. The function will be called once, when the upload is initiated, right before the data is uploaded. It will be called with one parameter, uploadId, which represents the id of the started upload.
  - onChunkUploaded - (optional) - Callback function (uploadedSize: number, totalSize: number) => void. The function will be called each time a chunk is uploaded, multiple times depending on the upload size, with two parameters. The first, uploadedSize, represents the size in bytes of the uploaded chunk. The second, totalSize, represents the total size of the upload in bytes.
Response
interface UploadRes {
uploadId: string;
bucketId: string;
protocolLink: string;
dynamicLinks: Domain[];
cid: string;
}
NOTE: Check out the example on how to use the encryptUpload function.
Decrypt Upload
Used to decrypt the data of the previously encrypted upload.
NOTE: Both the encryptUpload and decryptUpload functions require an instance of LitNodeClient that is already connected. To create an instance, check out the @lit-protocol/lit-node-client package or our example.
const decryptedData: Uint8Array = await spheron.decryptUpload({
  authSig,
  ipfsCid,
  litNodeClient,
});
Function decryptUpload parameters:
- authSig - (optional) - The authSig of the user. You can use siwe to create it. Check out the example.
- sessionSigs - (optional) - The session signatures to use to authorize the user with the nodes.
- ipfsCid - The IPFS CID of the upload that was previously encrypted and uploaded.
- litNodeClient - An instance of LitNodeClient that is already connected.
Response
- The function returns a Uint8Array, which represents the decrypted data.
NOTE: Check out the example on how to use the decryptUpload function.
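As an illustrative sketch of the whole flow, the helper below encrypts a local file, uploads it to IPFS, and then decrypts it again. The authSig, accessControlConditions, chain, and connected litNodeClient are assumed to be prepared beforehand as described above, and the file path and bucket name are placeholders:

import { SpheronClient } from "@spheron/storage";

async function encryptAndDecrypt(
  spheron: SpheronClient,
  litNodeClient: any, // a connected LitNodeClient instance
  authSig: any,
  accessControlConditions: any[],
  chain: string
): Promise<string> {
  // Encrypt the file and upload it to IPFS.
  const { cid } = await spheron.encryptUpload({
    authSig,
    accessControlConditions,
    chain,
    filePath: "./secret.txt", // placeholder path
    litNodeClient,
    configuration: { name: "my-encrypted-bucket" }, // placeholder bucket name
  });

  // Decrypt the upload back into its original bytes.
  const decrypted: Uint8Array = await spheron.decryptUpload({
    authSig,
    ipfsCid: cid,
    litNodeClient,
  });

  return new TextDecoder().decode(decrypted);
}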
Create Single Upload Token
Used to create a token that can be used to upload data directly from the browser, using the @spheron/browser-upload package. Check out the browser-upload docs for more information.
const { uploadToken } = await client.createSingleUploadToken({
name: bucketName,
protocol,
maxSize,
});
Function createSingleUploadToken takes the following parameters:
- name - (string) - Represents the name of the bucket to which you are uploading the data.
- protocol - (string) - The protocol on which the data will be uploaded. The supported protocols are [ARWEAVE, IPFS, FILECOIN].
- maxSize - (number) (optional) - The maximum size of the upload in bytes. If the uploaded data exceeds this size, an error will be thrown.
Response
interface TokenRes {
uploadToken: string;
}
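For instance, a minimal sketch of a backend endpoint that hands such a token to the browser, assuming an Express server (Express is not part of the SDK, and the route and bucket name are illustrative):

import express from "express";
import { SpheronClient, ProtocolEnum } from "@spheron/storage";

const app = express();
const client = new SpheronClient({ token: process.env.SPHERON_TOKEN ?? "" });

app.get("/initiate-upload", async (_req, res) => {
  try {
    // Create a token that the browser can pass to @spheron/browser-upload.
    const { uploadToken } = await client.createSingleUploadToken({
      name: "browser-uploads", // placeholder bucket name
      protocol: ProtocolEnum.IPFS,
    });
    res.json({ uploadToken });
  } catch (error) {
    res.status(500).json({ message: "Failed to create upload token" });
  }
});

app.listen(8080);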
Pin A CID on IPFS
Used to pin a CID on the Spheron dedicated IPFS node.
const { uploadId, bucketId, protocolLink, dynamicLinks } = await client.pinCID({
name,
cid,
inBackground,
});
Function pinCID takes one parameter:
- configuration - An object with the following parameters:
  - configuration.name - (string) - Represents the name of the bucket on which you are pinning the CID.
  - configuration.cid - (string) - The CID of the file you want to be pinned.
  - configuration.inBackground - (boolean) (optional) - When set to true, the pinning will be processed in the background. Using the uploadId from the response, the status can be checked via the getUpload function.
NOTE: When inBackground is set to true, dynamicLinks will not be included in the response.
Response
interface PinCIDResponse {
uploadId: string;
bucketId: string;
protocolLink: string;
dynamicLinks: string[];
}
Pin CIDs on IPFS
Used to pin an array of CIDs to the Spheron IPFS node. For each of the provided CIDs, an upload will be created with the status IN_PINNING_QUEUE. These uploads will be dequeued and processed as soon as a free parallel upload slot becomes available.
NOTE: It's essential to keep in mind that if you attempt to pin more CIDs than the number of available parallel uploads, your standard uploads may be temporarily affected. They will only resume when a parallel upload slot becomes available.
const response: {
uploadId: string,
cid: string,
}[] = await client.pinCIDs({
name,
cids,
});
Function pinCIDs takes one parameter:
- configuration - An object with the following parameters:
  - configuration.name - (string) - Represents the name of the bucket on which you are pinning the CIDs.
  - configuration.cids - (string[]) - The array of CIDs that will be pinned to the Spheron IPFS node.
Response: an array of objects with the following structure:
{
uploadId: string; // the id of the upload.
cid: string; // the CID that was sent to be pinned
}
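Since the returned uploads start in the IN_PINNING_QUEUE status, one way to wait for them is a simple polling loop over getUpload, sketched below. It assumes the UploadStatusEnum shown later on this page is exported by the package, and the 5-second interval is arbitrary:

import { SpheronClient, UploadStatusEnum } from "@spheron/storage";

async function waitForPins(
  client: SpheronClient,
  uploadIds: string[]
): Promise<void> {
  const pending = new Set(uploadIds);

  while (pending.size > 0) {
    for (const uploadId of [...pending]) {
      const upload = await client.getUpload(uploadId);
      // Stop tracking uploads that are no longer queued or in progress.
      if (
        upload.status !== UploadStatusEnum.IN_PINNING_QUEUE &&
        upload.status !== UploadStatusEnum.IN_PROGRESS
      ) {
        console.log(`Upload ${uploadId} finished with status ${upload.status}`);
        pending.delete(uploadId);
      }
    }
    if (pending.size > 0) {
      // Wait before polling again (interval chosen arbitrarily).
      await new Promise((resolve) => setTimeout(resolve, 5000));
    }
  }
}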
Get CID Pin Status
Used to get the pinning status for the specified CID.
const pinStatus: PinStatus = await client.getCIDStatus(CID);
Function getCIDStatus takes just one parameter:
- CID - (string) - The CID of the file
Response
interface PeerData {
peerName: string; // Name of the peer
ipfsPeerId: string; // The peer ID in the IPFS network
ipfsPeerAddresses: string[]; // Array of addresses at which the peer can be reached
status: string; // Status of the pin, such as "pinned" or "unpinned"
timeStamp: string; // Timestamp when the status was last updated
error: string; // Any errors that have occurred, empty string if no errors
attemptCount: number; // Number of attempts made to change the pin status
priorityPin: boolean; // Indicates whether this is a priority pin
}
interface PinStatus {
cid: string; // The Content Identifier (CID) of the file
name: string; // Name associated with the file, can be empty
allocations: string[]; // Array of peers where the file is allocated
origins: string[]; // Array of origins from where the file can be fetched
created: string; // Timestamp when the file was created
metadata: null | any; // Metadata associated with the file, null if none
peerMap: { [key: string]: PeerData }; // Map of peer data, with peer IDs as keys
}
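As a small sketch of how this response can be inspected, the helper below scans the peerMap and reports whether any peer currently reports the CID as pinned:

import { SpheronClient } from "@spheron/storage";

async function isPinnedSomewhere(
  client: SpheronClient,
  cid: string
): Promise<boolean> {
  const status = await client.getCIDStatus(cid);

  // Check every peer entry for a "pinned" status.
  return Object.values(status.peerMap).some((peer) => peer.status === "pinned");
}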
Get Upload
Used to get the upload by its id.
const upload: Upload = await client.getUpload(uploadId);
Function getUpload takes just one parameter:
- uploadId - (string) - The upload id of the file uploaded using Spheron SDK.
Response
interface UploadedFile {
fileName: string;
fileSize: number;
}
interface Upload {
id: string;
protocolLink: string;
uploadDirectory: UploadedFile[];
status: UploadStatusEnum;
memoryUsed: number;
bucketId: string;
protocol: string;
}
Pin Upload
Used to pin the upload by its id.
const upload: Upload = await client.pinUpload(uploadId);
Function pinUpload takes just one parameter:
- uploadId - (string) - The upload id.
Response: returns the pinned upload.
Only IPFS and Filecoin uploads with the status Unpinned can be pinned. The storage used will be increased by the upload size after the upload is pinned.
Unpin Upload
Used to unpin the upload by its id.
const upload: Upload = await client.unpinUpload(uploadId);
Function unpinUpload takes just one parameter:
- uploadId - (string) - The upload id.
Response: returns the unpinned upload.
Only IPFS and Filecoin uploads with the status Pinned can be unpinned. The storage used will be decreased by the upload size after the upload is unpinned.
Organization
Migrate Buckets From Static Site Organization
Used to migrate the Buckets and their Uploads from Static Site organization to Storage organization.
To execute the migration you need to use the token of the Storage organization, and you need to be the owner of both the Storage and the Static Site organizations.
From version 2.0.0 of @spheron/storage, the SDK will only work with the Storage organization. Using this method you can migrate the Buckets and Uploads you have used before. Executing the migration will move the following data from the Static Site organization to the Storage organization:
- Buckets
- Uploads
- Domains
- IPNS Records
After you migrate the data, it is not possible to revert it.
const { numberOfBuckets, numberOfUploads } =
await client.migrateStaticSiteOrgToStorage(
staticSiteOrganizationId,
storageOrganizationId
);
Function migrateStaticSiteOrgToStorage takes two parameters:
- staticSiteOrganizationId - (string) - The id of the Static Site organization from which the data will be moved.
- storageOrganizationId - (string) - The id of the Storage organization to which the data will be moved. The token for this organization should be used when initializing the SpheronClient.
Response
interface MigrationResponse {
numberOfBuckets: number; // the number of buckets that were migrated
numberOfUploads: number; // the number of uploads that were migrated
}
Get Organization Usage
Used to get the usage of the current active subscription of the organization.
const organization: UsageWithLimits = await client.getOrganizationUsage(
organizationId
);
Function getOrganizationUsage takes just one parameter:
- organizationId - (string) - The id of the organization.
Response
interface UsageWithLimits {
  used: {
    bandwidth: number; // the bytes of bandwidth used for the current subscription
    storageArweave: number; // the bytes of used Arweave storage for the current subscription
    storageIPFS: number; // the bytes of used IPFS storage
    storageFilecoin: number; // the bytes of used Filecoin storage
    domains: number; // the number of domains and subdomains an organization has
    hnsDomains: number; // the number of HNS domains and subdomains an organization has
    ensDomains: number; // the number of ENS domains an organization has
    numberOfRequests: number; // the number of requests the organization has had till now
    parallelUploads: number; // the number of uploads in progress for an organization
    imageOptimization: number; // the number of optimized images the organization has had till now
    ipfsBandwidth: number; // the bytes of IPFS bandwidth used for the current subscription
    ipfsNumberOfRequests: number; // the number of IPFS requests the organization has had till now
  };
  limit: {
    bandwidth: number; // the bandwidth limit for the current subscription
    storageArweave: number; // the limit in bytes for the Arweave storage for the current subscription
    storageIPFS: number; // the limit in bytes for the IPFS and Filecoin storage for the current subscription
    domains: number; // the limit on how many domains and subdomains an organization can have
    hnsDomains: number; // the limit on how many HNS domains and subdomains an organization can have
    ensDomains: number; // the limit on how many ENS domains an organization can have
    parallelUploads: number; // the limit on how many parallel uploads an organization can have
    imageOptimization: number; // the limit on how many optimized images an organization can have
    ipfsBandwidth: number; // the IPFS bandwidth limit for the current subscription
  };
}
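For example, a small sketch that reports how much of the IPFS storage limit is currently used:

import { SpheronClient } from "@spheron/storage";

async function reportIpfsStorage(
  client: SpheronClient,
  organizationId: string
): Promise<void> {
  const usage = await client.getOrganizationUsage(organizationId);

  const usedGb = usage.used.storageIPFS / 1024 ** 3;
  const limitGb = usage.limit.storageIPFS / 1024 ** 3;
  const percentUsed = (usage.used.storageIPFS / usage.limit.storageIPFS) * 100;

  console.log(
    `IPFS storage: ${usedGb.toFixed(2)} GB of ${limitGb.toFixed(2)} GB used (${percentUsed.toFixed(1)}%)`
  );
}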
IPNS Records
Get all IPNS records from Bucket
Used to get the IPNS records that were published for the provided bucketId.
const ipnsRecords: IpnsRecord[] = await client.getBucketIpnsRecords(bucketId);
Function getBucketIpnsRecords takes just one parameter:
- bucketId - (string) - The id of the bucket.
Response
interface IpnsRecord {
id: string;
ipnsHash: string;
ipnsLink: string;
memoryUsed: number;
bucketId: string;
createdAt: Date;
updatedAt: Date;
}
Get IPNS Record
Used to get the information about the specific IPNS record.
const ipnsRecord: IpnsRecord = await client.getBucketIpnsRecord(
bucketId,
ipnsRecordId
);
Function getBucketIpnsRecord takes two parameters:
- bucketId - (string) - The id of the bucket.
- ipnsRecordId - (string) - The id of the IPNS record.
Response
interface IpnsRecord {
id: string;
ipnsHash: string;
ipnsLink: string;
memoryUsed: number;
bucketId: string;
createdAt: Date;
updatedAt: Date;
}
Add IPNS Record
Used to publish a new IPNS record.
const ipnsRecord: IpnsRecord = await client.addBucketIpnsRecord(
bucketId,
uploadId
);
Function addBucketIpnsRecord takes two parameters:
- bucketId - (string) - The id of the bucket.
- uploadId - (string) - The id of the upload to which the IPNS record should point.
Response
interface IpnsRecord {
id: string;
ipnsHash: string;
ipnsLink: string;
memoryUsed: number;
bucketId: string;
createdAt: Date;
updatedAt: Date;
}
Update IPNS Record
Used to update an existing IPNS record of the Bucket.
const ipnsRecord: IpnsRecord = await client.updateBucketIpnsRecord(
bucketId,
ipnsRecordId,
uploadId
);
Function updateBucketIpnsRecord takes three parameters:
- bucketId - (string) - The id of the bucket.
- ipnsRecordId - (string) - The id of the IPNS record.
- uploadId - (string) - The id of the upload to which the IPNS record should point.
Response
interface IpnsRecord {
id: string;
ipnsHash: string;
ipnsLink: string;
memoryUsed: number;
bucketId: string;
createdAt: Date;
updatedAt: Date;
}
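A common pattern, sketched below, is to upload new content and then repoint an existing IPNS record at the new upload so that its ipnsLink keeps resolving to the latest data. The bucketId, ipnsRecordId, file path, and bucket name are assumed to be known already:

import { SpheronClient, ProtocolEnum } from "@spheron/storage";

async function publishNewVersion(
  client: SpheronClient,
  bucketId: string,
  ipnsRecordId: string,
  filePath: string,
  bucketName: string
): Promise<string> {
  // Upload the new content into the same bucket.
  const { uploadId } = await client.upload(filePath, {
    protocol: ProtocolEnum.IPFS,
    name: bucketName,
  });

  // Repoint the existing IPNS record to the new upload.
  const record = await client.updateBucketIpnsRecord(
    bucketId,
    ipnsRecordId,
    uploadId
  );

  return record.ipnsLink;
}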
Delete IPNS Record
Used to delete the IPNS Record of the Bucket.
await client.deleteBucketIpnsRecord(bucketId, ipnsRecordId);
Function deleteBucketIpnsRecord takes two parameters:
- bucketId - (string) - The id of the bucket.
- ipnsRecordId - (string) - The id of the IPNS record.
Bucket
Get Buckets
Used to get the buckets of the organization. The method supports pagination and filtering by name or bucket state.
const buckets: Bucket[] = await client.getOrganizationBuckets(organizationId, {
skip,
limit,
state,
name,
});
Function getOrganizationBuckets parameters:
- organizationId - The id of the organization of the token.
- options - An object with the following parameters:
  - skip - (number)
  - limit - (number)
  - state - (BucketStateEnum) (optional) - For filtering the buckets by state.
  - name - (string) (optional) - For filtering the buckets by name. The response will include the buckets whose names contain the provided name.
Response
enum BucketStateEnum {
MAINTAINED = "MAINTAINED",
ARCHIVED = "ARCHIVED",
}
interface Bucket {
id: string;
name: string;
organizationId: string;
state: BucketStateEnum;
}
Get Bucket Count
Used to get the number of buckets an organization has. The method supports filtering by name or bucket state.
const count: number = await client.getOrganizationBucketCount(organizationId, {
name,
state,
});
Function getOrganizationBucketCount parameters:
- organizationId - The id of the organization of the token.
- options - (optional) - An object with the following parameters:
  - state - (BucketStateEnum) (optional) - For filtering the buckets by state.
  - name - (string) (optional) - For filtering the buckets by name. The response will include the buckets whose names contain the provided name.
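As a sketch of how the count and the pagination options can be combined, the helper below pages through all MAINTAINED buckets of an organization. It assumes the Bucket and BucketStateEnum types shown above are exported by the package, and the page size of 20 is arbitrary:

import { SpheronClient, Bucket, BucketStateEnum } from "@spheron/storage";

async function getAllMaintainedBuckets(
  client: SpheronClient,
  organizationId: string
): Promise<Bucket[]> {
  const pageSize = 20; // arbitrary page size
  const total = await client.getOrganizationBucketCount(organizationId, {
    state: BucketStateEnum.MAINTAINED,
  });

  const buckets: Bucket[] = [];
  for (let skip = 0; skip < total; skip += pageSize) {
    const page = await client.getOrganizationBuckets(organizationId, {
      skip,
      limit: pageSize,
      state: BucketStateEnum.MAINTAINED,
    });
    buckets.push(...page);
  }
  return buckets;
}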
Get Bucket
Used to get the bucket information for the specified bucketId.
const bucket: Bucket = await client.getBucket(bucketId);
Function getBucket takes just one parameter:
- bucketId - (string) - The id of the bucket.
Response
enum BucketStateEnum {
MAINTAINED = "MAINTAINED",
ARCHIVED = "ARCHIVED",
}
interface Bucket {
id: string;
name: string;
organizationId: string;
state: BucketStateEnum;
}
Archive Bucket
Used to archive the Bucket. New uploads cannot be created for an archived bucket.
await client.archiveBucket(bucketId);
Function archiveBucket takes just one parameter:
- bucketId - (string) - The id of the bucket.
Unarchive Bucket
Used to unarchive the Bucket.
await client.unarchiveBucket(bucketId);
Function unarchiveBucket takes just one parameter:
- bucketId - (string) - The id of the bucket.
Get Bucket Upload Count
Used to get the number of uploads for the specified bucket.
const count: number = await client.getBucketUploadCount(bucketId);
Function getBucketUploadCount takes just one parameter:
- bucketId - (string) - The id of the bucket.
Get Bucket Uploads
Used to get the uploads of the bucket. The default value for skip is 0. The default value for limit is 6.
const bucketUploads: Upload[] = await client.getBucketUploads(bucketId, {
  skip,
  limit,
});
Function getBucketUploads takes two parameters:
- bucketId - (string) - The id of the bucket.
- options - An object with the skip and limit pagination values.
Response
enum UploadStatusEnum {
IN_PROGRESS = "InProgress",
IN_PINNING_QUEUE = "InPinningQueue",
CANCELED = "Canceled",
UPLOADED = "Uploaded",
FAILED = "Failed",
TIMED_OUT = "TimedOut",
PINNED = "Pinned",
UNPINNED = "Unpinned",
}
interface UploadedFile {
fileName: string;
fileSize: number;
}
interface Upload {
id: string;
protocolLink: string;
uploadDirectory: UploadedFile[];
status: UploadStatusEnum;
memoryUsed: number;
bucketId: string;
protocol: string;
}
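Building on the same pagination pattern, the sketch below walks through every upload of a bucket and sums the memoryUsed values (the page size is arbitrary):

import { SpheronClient } from "@spheron/storage";

async function getBucketStorageUsed(
  client: SpheronClient,
  bucketId: string
): Promise<number> {
  const pageSize = 10; // arbitrary page size
  const total = await client.getBucketUploadCount(bucketId);

  let bytesUsed = 0;
  for (let skip = 0; skip < total; skip += pageSize) {
    const uploads = await client.getBucketUploads(bucketId, {
      skip,
      limit: pageSize,
    });
    for (const upload of uploads) {
      bytesUsed += upload.memoryUsed;
    }
  }
  return bytesUsed;
}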
Bucket Domains
Get Bucket Domains
Used to get the domains that are attached to the specified bucketId.
const bucketDomains: Domain[] = await client.getBucketDomains(bucketId);
Function getBucketDomains takes just one parameter:
- bucketId - (string) - The id of the bucket.
Response
enum DomainTypeEnum {
DOMAIN = "domain",
SUBDOMAIN = "subdomain",
HANDSHAKE_DOMAIN = "handshake-domain",
HANDSHAKE_SUBDOMAIN = "handshake-subdomain",
ENS_DOMAIN = "ens-domain",
}
interface Domain {
id: string;
name: string;
link: string;
verified: boolean;
bucketId: string;
type: DomainTypeEnum;
}
Get Bucket Domain
Used to get the information about the specific domain.
const bucketDomain: Domain = await client.getBucketDomain(
bucketId,
domainIdentifier
);
Function getBucketDomain takes two parameters:
- bucketId - (string) - The id of the bucket.
- domainIdentifier - (string) - It can either be the id of the domain or the name of the domain.
Response
enum DomainTypeEnum {
DOMAIN = "domain",
SUBDOMAIN = "subdomain",
HANDSHAKE_DOMAIN = "handshake-domain",
HANDSHAKE_SUBDOMAIN = "handshake-subdomain",
ENS_DOMAIN = "ens-domain",
}
interface Domain {
id: string;
name: string;
link: string;
verified: boolean;
bucketId: string;
type: DomainTypeEnum;
}
Add Bucket Domain
Used to add a new domain to the specified bucket. The link property needs to have the value of the protocolLink of an existing upload from the specified bucket, or the value of the ipnsLink of an existing IPNS record from the specified bucket. After adding a new domain, you will need to set up the record on your DNS provider:
- domain: you should create an A type record with the value 13.248.203.0 and the same name as the domain you have added.
- subdomain: you should create a CNAME type record with the value cname.spheron.io and the same name as the domain you have added.
- handshake-domain: you should create an A type record with the value ipfs.namebase.io and @ for the name. Also, you should create a TXT type record with the link for a value and _contenthash for a name.
- handshake-subdomain: you should create an A type record with the value ipfs.namebase.io and the same name as the domain you have added. Also, you should create a TXT type record with the link for a value and _contenthash.<name_of_the_domain> for a name.
- ens-domain: you should create a CONTENT type record with the link for a value and the same name as the domain you have added.
const bucketDomain: Domain = await client.addBucketDomain(bucketId, {
link,
type,
name,
});
Function addBucketDomain takes two parameters:
- bucketId - (string) - The id of the bucket.
- bucketDetails - An object with the link, type, and name of the domain to be added.
After you have set up the record on your DNS provider, you should call the verifyBucketDomain method to verify your domain on Spheron. After the domain is verified, the data behind the link will be cached on the Spheron CDN.
Response
enum DomainTypeEnum {
DOMAIN = "domain",
SUBDOMAIN = "subdomain",
HANDSHAKE_DOMAIN = "handshake-domain",
HANDSHAKE_SUBDOMAIN = "handshake-subdomain",
ENS_DOMAIN = "ens-domain",
}
interface Domain {
id: string;
name: string;
link: string;
verified: boolean;
bucketId: string;
type: DomainTypeEnum;
}
Verify Bucket Domain
Used to verify the domain, after which the content behind the domain will be cached on CDN.
const domainStatus: Domain = await client.verifyBucketDomain(
bucketId,
domainIdentifier
);
Function verifyBucketDomain takes two parameters:
- bucketId - (string) - The id of the bucket.
- domainIdentifier - (string) - It can either be the id of the domain or the name of the domain.
Response
enum DomainTypeEnum {
DOMAIN = "domain",
SUBDOMAIN = "subdomain",
HANDSHAKE_DOMAIN = "handshake-domain",
HANDSHAKE_SUBDOMAIN = "handshake-subdomain",
ENS_DOMAIN = "ens-domain",
}
interface Domain {
id: string;
name: string;
link: string;
verified: boolean;
bucketId: string;
type: DomainTypeEnum;
}
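Putting the two calls together, a sketch of the full flow is: add the domain, configure your DNS provider as described above, and then verify it. The helper below assumes the DNS record is already in place when verifyBucketDomain is called, and that the Domain and DomainTypeEnum types shown above are exported by the package:

import { SpheronClient, Domain, DomainTypeEnum } from "@spheron/storage";

async function attachDomain(
  client: SpheronClient,
  bucketId: string,
  domainName: string,
  protocolLink: string
): Promise<Domain> {
  // Register the domain on the bucket, pointing it at an existing upload.
  const domain = await client.addBucketDomain(bucketId, {
    name: domainName,
    type: DomainTypeEnum.DOMAIN,
    link: protocolLink,
  });

  // Once the A record is set at your DNS provider, verification caches the
  // content behind the domain on the Spheron CDN.
  return client.verifyBucketDomain(bucketId, domain.name);
}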
Update Bucket Domain
Used to update an existing domain of the Bucket.
const bucketDomain: Domain = await client.updateBucketDomain(
bucketId,
domainIdentifier,
{
link,
name,
}
);
Function updateBucketDomain takes three parameters:
- bucketId - (string) - The id of the bucket.
- domainIdentifier - (string) - It can either be the id of the domain or the name of the domain.
- options - An object with the link and name of the domain to be updated.
Response
enum DomainTypeEnum {
DOMAIN = "domain",
SUBDOMAIN = "subdomain",
HANDSHAKE_DOMAIN = "handshake-domain",
HANDSHAKE_SUBDOMAIN = "handshake-subdomain",
ENS_DOMAIN = "ens-domain",
}
interface Domain {
id: string;
name: string;
link: string;
verified: boolean;
bucketId: string;
type: DomainTypeEnum;
}
Delete Bucket Domain
Used to delete the domain of the Bucket.
await client.deleteBucketDomain(bucketId, domainIdentifier);
Function deleteBucketDomain takes two parameters:
- bucketId - (string) - The id of the bucket.
- domainIdentifier - (string) - It can either be the id of the domain or the name of the domain.
Get CDN DNS Records
Used to get the values that should be set on your DNS provider for your bucket domains.
const { cdnARecords, cdnCnameRecords } = await client.getCdnDnsRecords();
Function getCdnDnsRecords doesn't take any parameters.
Response
- cdnARecords - (string) - will contain the DNS value that should be used for domains.
- cdnCnameRecords - (string) - will contain the DNS value that should be used for subdomains.
Get Token Scope
Used to get the scope of the token.
const tokenScope: TokenScope = await client.getTokenScope();
Function getTokenScope takes no parameters.
Response
interface TokenScope {
user: {
id: string,
username: string,
name: string,
email: string,
};
organizations: {
id: string,
name: string,
username: string,
}[];
}
Storage Utility
Transforming CID from V0 to V1 and vice versa
The package also provides a couple of methods for transforming a CID from V0 to V1 and vice versa.
import { ipfs } from "@spheron/storage";
const cid = <CID_VALUE>;
const v1 = ipfs.utils.toV1(cid);
console.log("CID V1", v1);
const v0 = ipfs.utils.toV0(cid);
console.log("CID V0", v0);
Changes between <2.0.0 and >=2.0.0
This section describes the changes between the <2.0.0 and >=2.0.0 versions of the @spheron/storage SDK.
- The versions <2.0.0 require a token for a Static Site organization. Versions starting from 2.0.0 require a token for a Storage organization.
- Changes in methods and interfaces:
  - getUpload - The Upload interface has changed. The buildDirectory property is renamed to uploadDirectory, and it is now an array with both the name and the size of the uploaded files. The UploadStatusEnum has also changed.
  - getOrganizationUsage - New properties have been added.
  - getBucket - The Bucket interface does not include the domains anymore. To fetch the domains of the bucket, you can use the getBucketDomains method.
  - getBucketUploadCount - The method now only returns the total number of uploads.
  - getBucketUploads - The Upload interface has changed. The buildDirectory property is renamed to uploadDirectory, and it is now an array with both the name and the size of the uploaded files. The UploadStatusEnum has also changed.
- IPNS Records are now grouped on the bucket level. Check out the IPNS Records section above for more details.