saran

Reputation: 191

Is it possible to use an external or third-party storage system with Artifactory?

We are looking into whether JFrog Artifactory can be configured to work with other object storage systems.

Is it possible? If so, could you please point me in the right direction?

Upvotes: 2

Views: 779

Answers (2)

tylwright

Reputation: 33

Currently, we're running a two-node HA cluster of Artifactory 7.x that points to Hitachi Content Platform (HCP), which is S3 compliant. It works great! We cache 500GB locally, and the rest lives on the HCP.

Before implementing this, we double-checked with JFrog support regarding the wording of their documentation. They will, in fact, support any S3-compliant system as Artifactory's backend storage provider.

We configured it via these instructions: https://www.jfrog.com/confluence/display/JFROG/S3+Object+Storage

Our binarystore.xml looks like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<config version="2">
    <chain>
        <provider type="cache-fs" id="cache-fs-eventual-s3">
            <provider type="sharding-cluster" id="sharding-cluster-eventual-s3">
                <dynamic-provider type="remote" id="remote-s3"/>
                <sub-provider type="eventual-cluster" id="eventual-cluster-s3">
                    <provider type="retry" id="retry-s3">
                        <provider type="s3" id="s3"/>
                    </provider>
                </sub-provider>
            </provider>
        </provider>
    </chain>
    <provider type="cache-fs" id="cache-fs-eventual-s3">
        <maxCacheSize>500000000000</maxCacheSize>
        <cacheProviderDir>cache</cacheProviderDir>
    </provider>
    <provider type="sharding-cluster" id="sharding-cluster-eventual-s3">
        <writeBehavior>crossNetworkStrategy</writeBehavior>
        <readBehavior>crossNetworkStrategy</readBehavior>
        <redundancy>1</redundancy>
        <property name="zones" value="local,remote"/>
    </provider>
    <provider type="eventual-cluster" id="eventual-cluster-s3">
        <zone>local</zone>
    </provider>
    <provider type="retry" id="retry-s3">
        <maxTrys>10</maxTrys>
    </provider>
    <provider type="s3" id="s3">
        <bucketName>Artifactory</bucketName>
        <endpoint>http://namespace.tenant.cluster.com</endpoint>
        <credential>REMOVED</credential>
        <port>80</port>
        <identity>REMOVED</identity>
        <httpsOnly>false</httpsOnly>
        <s3AwsVersion>AWS4-HMAC-SHA256</s3AwsVersion>
        <property name="httpclient.max-connections" value="300"/>
        <property name="s3service.disable-dns-buckets" value="true"/>
    </provider>
    <provider type="remote" id="remote-s3">
        <checkPeriod>15000</checkPeriod>
        <connectionTimeout>5000</connectionTimeout>
        <socketTimeout>30000</socketTimeout>
        <maxConnections>300</maxConnections>
        <connectionRetry>2</connectionRetry>
        <zone>remote</zone>
    </provider>
</config>

I also posted a guide on my personal blog back when we first set it up with Artifactory 6.x. You can find that here: https://www.tyler-wright.com/using-hitachi-content-platform-as-backend-storage-for-jfrogs-artifactory/

Upvotes: 1

galusben

Reputation: 6382

Artifactory supports a wide range of external filestores, including all the major cloud object stores:

  • Amazon S3-compatible storage
  • Google Cloud Storage
  • Azure Blob Storage
  • Any kind of local mounts
  • NFS
  • Database (Full DB)

The filestore is highly configurable, with multiple layers such as caching and sharding that can be chained together in binarystore.xml.
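For example, even without any external object store, a minimal binarystore.xml can layer a local cache over the default filesystem storage using one of the built-in chain templates (a sketch based on the standard `cache-fs` template; the cache size and directory here are placeholder values):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<config version="2">
    <!-- Built-in template: cache-fs layered over file-system storage -->
    <chain template="cache-fs"/>
    <provider type="cache-fs" id="cache-fs">
        <!-- Cache size in bytes (placeholder: ~100GB) -->
        <maxCacheSize>100000000000</maxCacheSize>
        <cacheProviderDir>cache</cacheProviderDir>
    </provider>
</config>
```

Swapping the underlying storage for S3, GCS, or Azure is then a matter of choosing a different template and supplying the provider's credentials and endpoint, as described in the documentation below.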

Please read: https://www.jfrog.com/confluence/display/RTF/Configuring+the+Filestore

Upvotes: 4
