Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

  • 1.  Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Mon February 12, 2024 10:55 AM

    Hello there. I have looked at the existing blog post about API Connect and I don't believe it answers the question; however, I stand ready to be corrected.

    I am developing an integration for a client that will require a file located in temporary Lambda storage to be delivered to a third party Aspera destination. I am not in control of that destination. I will only get credentials suitable for manual use in an Aspera Shares session, or in Aspera Client. I would like to have the Lambda function perform the upload.

    How would I go about making that happen? My current plan is to install the Aspera CLI as a Lambda layer - perhaps I have already hit on the best method although there do appear to be many APIs available that might work.

    All the best

    JS



    ------------------------------
    Jeremy Smith
    ------------------------------


  • 2.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Mon February 12, 2024 02:04 PM

    Hi Jeremy,

    Using the Aspera CLI as you mentioned is the way to go.

    See this reference

    https://github.com/IBM/aspera-cli?tab=readme-ov-file#shares-1-sample-commands

    ascli makes the API call into Shares to get the transfer token from the High Speed Transfer Server, and then ascli starts the transfer.

    thanks,

    Ben



    ------------------------------
    BEN FORSYTH
    ------------------------------



  • 3.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Mon February 12, 2024 02:09 PM

    Right, `ascli` can be a way to go.

    Note that there is also a container with `ascli`.

    Another possibility is to develop a small program using the Aspera Transfer SDK; one can find example code here: https://github.com/laurent-martin/aspera-api-examples

    The Shares API for transfers is the same as the Node API.

    Also, since the ascli code is open source, one can take inspiration from it.



    ------------------------------
    Laurent Martin
    ------------------------------



  • 4.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Wed February 14, 2024 07:31 AM

    Thank you - would you mind giving me a hint as to which of the Python examples most closely matches my use case? I will be uploading a few small XML and PNG files to Aspera Shares, I believe. (I know, Aspera is massive overkill for that, but it looks as if it's a requirement of the third party recipient.)

    My alternative plan (now I know about the container) is to set up the container image as an Amazon ECS/Fargate task and fire a command line string at it from a Lambda function. I can mount an EFS volume to both Lambda and ECS, so I can make the files accessible to both bits of code.

    Are you aware of this already having been done, and if so did it work?

    Thank you



    ------------------------------
    Jeremy Smith
    ------------------------------



  • 5.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Wed February 14, 2024 08:05 AM
    Edited by Laurent Martin Wed February 14, 2024 08:18 AM

    This sample should work with shares:

    https://github.com/laurent-martin/aspera-api-examples/blob/main/python/src/node.py

    In the configuration, the URL should look like this:

    https://shares.example.com/node_api

    (note the node_api at the end)

    Then you also need to download the SDK:

    https://developer.ibm.com/apis/catalog/aspera--aspera-transfer-sdk/Introduction

    or, if you just clone the GitHub repo, I believe it downloads the SDK automatically from: https://ibm.biz/aspera_sdk

    If you want to try the CLI (https://github.com/IBM/aspera-cli), or even its container, the command is:

    ascli shares --url=https://shares.example.com --username=xxx --password=yyy files upload --to-folder=/sharesname file1 file2

    (there are many options to create a config file, encrypt secrets, etc...)

    ------------------------------
    Laurent Martin
    ------------------------------



  • 6.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri February 16, 2024 04:23 AM

    Thank you. I now have the container running in Amazon ECS, with an EFS volume mounted to it and also mounted to a Lambda function. I would like to test this by connecting to a Shares location and listing the files and folders already there (to confirm connectivity).

    Please could you tell me the ascli command I will need to run for that? I have looked and CTRL-F'd my way through the readme file and cannot find anywhere how to do this. Alternatively, if there is less in-depth, FAQ-style documentation somewhere, please could you let me know where it is and I will be able to self-serve.

    All the best



    ------------------------------
    Jeremy Smith
    ------------------------------



  • 7.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri February 16, 2024 04:48 AM

    There is a (short) section for Shares in the manual:

    https://github.com/IBM/aspera-cli?tab=readme-ov-file#shares-1-sample-commands

    As you can see, there is a browse command:

    in the ascli shares command, do: browse /

    You can also populate a configuration file. If you need to execute several commands in a row and do not want an overly long command line, you can do this:

    ascli conf preset update myshares --url=https://shares.example.com --username=xxx --password=yyy

    ascli conf preset set default shares myshares

    and then you can simply browse with:

    ascli shares br /

    (one can use just the significant prefix to shorten commands)

    The config file is kept in the user's home: $HOME/.aspera/ascli

    If you prefer, you can also use env vars:

    export ASCLI_URL=...

    export ASCLI_USERNAME=..

    etc...

    plenty of options to accommodate various use cases...



    ------------------------------
    Laurent Martin
    ------------------------------



  • 8.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri February 16, 2024 10:49 AM

    Hello -

    I am very sorry to bug you about this. I wish I could work it out for myself. I have run the command you hinted at, and ascli in the container returns an error in all of these forms:

    ERROR: Argument: unknown value for command: shares --url=https://xxx.xxx.xx --username=xxx --password=xxx browse /

    ERROR: Argument: unknown value for command: shares --url=https://xxx.xxx.xx --username=xxx --password=xxx browse

    ERROR: Argument: unknown value for command: shares --url=https://xxx.xxx.xx --username=x -xx-password=xxx files browse

    Please can you advise? Once I get this working, all I need it to do is send a few tiny files, so nothing complicated. Thank you for pointing out the other documentation - I have now found the PDF file, which has a helpful index.

    All the best,

    Jeremy Smith



    ------------------------------
    Jeremy Smith
    ------------------------------



  • 9.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri February 16, 2024 12:01 PM

    This command works for me:

    ascli shares --url=https://shares.example.com --username=XXXXXX --password=YYYYYY files browse /

    Can you post the exact command (without user/pass/host) you are using?



    ------------------------------
    BEN FORSYTH
    ------------------------------



  • 10.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri February 16, 2024 01:18 PM

    Hello -

    Thank you for replying. I have managed to get a command to run. The reason it was not working was that in the AWS Console ECS "run task" page there is a Container Overrides section where the ascli command parameters can be entered. As I am new to Docker, I did not realise that this needs to be a comma-separated list of parameters. Once I formatted the command correctly, it worked and ascli ran with those parameters. It's now reporting connection refused on port 443 of the Shares endpoint, but I guess that's something I should take up with the third party.
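
    In case it helps anyone else, the run-task override ends up looking roughly like this (an illustrative sketch only; the container name, URL and credentials are placeholders):

    {
        "containerOverrides": [
            {
                "name": "ascli",
                "command": [
                    "shares",
                    "--url=https://shares.example.com",
                    "--username=xxx",
                    "--password=yyy",
                    "files",
                    "browse",
                    "/"
                ]
            }
        ]
    }

    In the console's Container Overrides box this is just the same list entered in comma-separated form, and whether a leading "ascli" element is needed depends on the image's entrypoint.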

    So now I think I have everything I need to continue.

    Thank you both very much for your assistance.



    ------------------------------
    Jeremy Smith
    ------------------------------



  • 11.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Wed March 13, 2024 01:31 PM

    I have added an explicit example for Shares in python.

    As one can see it is the same as node api:

    https://github.com/laurent-martin/aspera-api-examples/blob/main/python/src/shares.py



    ------------------------------
    Laurent Martin
    ------------------------------



  • 12.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Thu March 14, 2024 01:15 PM

    Thank you this is very useful.

    I have now managed to implement the Aspera Transfer SDK in a container based Lambda function. It works most of the time however I am experiencing some trouble with the daemon exiting unexpectedly immediately after spawning. The error appears to be this one:

    { "appname": "faspmanagerd", "hostname": "127.0.0.1", "level": "error", "msg": "Error: (Session [Session [id=f90888f6-c252-47a5-872c-b5c986c8a317 transferid=89939f14-6c52-4217-9480-23b8a3595ddf state=Completed]] no stderr on failure) caught error (signal: killed) ", "time": "2024-03-14T16:59:49Z" }

    But I am not issuing a kill command until after the transfer. The function has 4096 MB of memory and works at other times.

    Any ideas?



    ------------------------------
    Jeremy Smith
    ------------------------------



  • 13.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Thu March 14, 2024 04:39 PM

    Please try allowing your Lambda more memory and see if that resolves the issue.

    I ran a simple ascp transfer test at a relatively low speed (50 Mbps) and my ascp process was using over 6 MB of memory.



    ------------------------------
    BEN FORSYTH
    ------------------------------



  • 14.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri March 15, 2024 05:46 AM

    Thank you I will try that



    ------------------------------
    Jeremy Smith
    ------------------------------



  • 15.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri March 15, 2024 09:09 AM

    @Ben

    6 MB fits easily within 4096 MB.

    On macOS, running a client ascp takes only 12 MB.

    Note that there is also the asperatransferd process.

    @Jeremy

    Basically, it works like this: your program somehow starts the asperatransferd process, then communicates with it over gRPC, and asperatransferd starts an ascp process for each transfer session.

    It seems that the ascp process fails (it is killed).

    You might try setting the debug level of ascp to 1 or 2 to get more details.

    (Don't leave the debug level raised in production, as it can affect performance.)

    In the asperatransferd config, the setting is fasp_runtime.log.level.

    The log file will be aspera-scp-transfer.log,

    located in the folder specified by the fasp_runtime.log.dir setting.
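
    For example, something like this in the daemon configuration file (a sketch only: the log directory is a placeholder, and the exact layout should be checked against your asperatransferd config):

    {
        "fasp_runtime": {
            "log": {
                "level": 2,
                "dir": "/tmp/aspera-logs"
            }
        }
    }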



    ------------------------------
    Laurent Martin
    ------------------------------



  • 16.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri March 15, 2024 09:44 AM

    Hello - thank you. I did try increasing memory, but it's still happening. It's a weird one and I think it's to do with the way the Lambda Node.js runtime is handling the child processes.

    I have two events arriving from SQS. Each one starts a Lambda function from a container. I have set it to run in sequence, so not simultaneous execution (in case the Aspera server doesn't like multiple logins).

    The first one goes through fine.

    In the second one the daemon is killed, it appears, by the kill signal from the first execution, even though it's a separate instance.

    So I think this is probably down to my code not cleaning up after itself somehow.



    ------------------------------
    Jeremy Smith
    ------------------------------



  • 17.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri March 15, 2024 09:53 AM

    OK - quick update - I have fixed my code and it now seems to work with back-to-back invocations & kills of the SDK in separate Lambdas.



    ------------------------------
    Jeremy Smith
    ------------------------------



  • 18.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri March 15, 2024 12:18 PM

    Thanks for the update.

    Indeed, currently "kill -INT" is the "clean" way to stop the Aspera transfer daemon (SDK)...
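
    For example, from a shell (assuming the daemon process is named asperatransferd, as above):

    kill -INT "$(pgrep asperatransferd)"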



    ------------------------------
    Laurent Martin
    ------------------------------



  • 19.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Thu April 04, 2024 10:22 AM

    This post was a great help to me. I was able to download the "Transfer Settings" using

    https://Aspera.Share.url/node_api/files/upload_setup.

    But I am unable to upload the file. I am using a C# Windows application to upload a file.

    I have attached the transfer specification (JSON format) to this post. I am using the code below to transfer the file, and I am running asperatransferd.exe.
    All the settings are correct.

    I am getting the error below when I try to start the transfer at the code highlighted above. Please help me overcome this issue.

    Grpc.Core.RpcException: 'Status(StatusCode="Internal", Detail="Error starting gRPC call. HttpRequestException: The SSL connection could not be established, see inner exception. AuthenticationException: Cannot determine the frame size or a corrupted frame was received.", DebugException="System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.")'



    ------------------------------
    Kuldeep P
    ------------------------------

    Attachment(s)

    json2.json


  • 20.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Thu April 04, 2024 01:43 PM

    Hi Kuldeep,

    The error suggests that the C# code is trying to open a TLS connection to the asperatransferd GRPC server and the latter may not be configured for TLS. I think you can either:

    • Configure your client to use insecure mode, so it doesn't use TLS. In the C# samples this is done before creating the client, for example:

    AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

    var client = new TransferService.TransferServiceClient(GrpcChannel.ForAddress("http://localhost:55002"));

    • Set up the Transfer SDK to accept TLS connections in the asperatransferd.json config file by adding a section like the following:

    "tls": {
        "enabled": true,
        "certificate": "/path/to/cert.pem",
        "key": "/path/to/key.pem"
    }

    You'd need to check whether the C# GRPC library you are using will require you to validate certificates and whether you'd need to get certificates signed by a Certificate Authority and issued for the hostname you use to connect.



    ------------------------------
    Jose Gomez
    ------------------------------



  • 21.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Thu April 11, 2024 09:13 AM
    Hello,
     
    I'm currently working on a similar scenario where we need to upload files to an Aspera server for our client, and I don't have control over that destination. Unlike the original poster, our challenge involves uploading large files from S3.

    We operate a Node.js backend and I've conducted tests with the Aspera Transfer SDK within it. However, our network blocks the UDP transfers (the initial TCP connection to the server works fine after whitelisting the host). Unfortunately, it seems that we cannot configure UDP with Istio in our architecture.

    Therefore, the idea arose to delegate the uploading task to an AWS Lambda function. I aim to read a large S3 object as a stream so that we don't need to download the entire file to temporary space and can upload chunks to the Aspera server. However, based on the documentation, I'm unsure if this is achievable with the ascp/ascp4 CLI. So my question is whether it's possible to set up a multi-part upload using the Aspera CLI (ascp) and manually upload partial files one after another, completing the upload with the final part.


    ------------------------------
    Riva Saringer
    ------------------------------



  • 22.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Thu April 11, 2024 07:42 PM

    The Aspera server only supports file transfer using the ascp program. The Transfer SDK supports a mode called HTTPFallback:

    https://developer.ibm.com/apis/catalog/aspera--aspera-transfer-sdk/API%20Reference#transfersdk.HTTPFallback

    It will attempt the UDP protocol first and then fall back to HTTP to communicate with the Aspera server. The Aspera server must have HTTP Fallback enabled for this to work.

    The ascp command relies on the UDP protocol or HTTP Fallback to communicate with an Aspera server, and there is no option to pass multi-part uploads like you can with the S3 API.
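
    For reference, in a classic (V1) transfer spec the fallback request is just a flag, roughly like this (a sketch only; the host is a placeholder, and the SDK's own HTTPFallback message is documented at the link above):

    {
        "remote_host": "shares.example.com",
        "http_fallback": true,
        ...
    }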



    ------------------------------
    BEN FORSYTH
    ------------------------------



  • 23.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Wed April 24, 2024 03:46 AM
    To streamline our file transfers, we've integrated the Aspera Transfer SDK into an AWS Lambda function. This setup allows us to upload data as a stream instead of downloading entire files, optimizing our process. However, we've encountered a bottleneck in upload speed due to GRPC's 4MB message limitation.
     
    Despite reviewing the relevant GitHub repository by Laurent Martin, we couldn't find a suitable example for increasing the message size. We've experimented with adjusting the GRPC configuration settings as follows:
     
    const aspera_sdk_client = new transfersdk.TransferService(
        "127.0.0.1:55002",
        grpc.credentials.createInsecure(),
        {
            "grpc.max_receive_message_length": (parseInt(process.env.S3_CHUNK_SIZE || "0", 10) || 100 * 1024 * 1024) + 102400,
            "grpc.max_send_message_length": (parseInt(process.env.S3_CHUNK_SIZE || "0", 10) || 100 * 1024 * 1024) + 102400,
        }
    );
     
    Despite these adjustments, we continue to encounter errors when attempting to exceed the 4MB message size limit. If you have any insights or suggestions on how to overcome this limitation and optimize our upload speeds, we would greatly appreciate your expertise.


    ------------------------------
    Riva Saringer
    ------------------------------



  • 24.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted 18 days ago

    Hi Riva, we are facing the same issue. Did you end up figuring out a solution for this? 



    ------------------------------
    Jerlyn Manohar
    ------------------------------



  • 25.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted 12 days ago
    Edited by Riva Saringer 12 days ago

    Hi Jerlyn,

    Unfortunately, no. However, the upload speed was still sufficient for us. When sending files to an Aspera server with fast bandwidth, we managed to upload about 500MB in 30 seconds from AWS.

    Reading an S3 stream in 4 MB chunks:

    // Instantiate the S3ReadStream
    const options = {
        s3: s3Client,
        command: new GetObjectCommand({
            Bucket: bucket,
            Key: fileKey,
        }),
        maxLength: fileSize,
        // TODO: ASPERA TRANSFER SDK GRPC MESSAGE LIMIT IS 4MB (4194304). Actual value needs to be slightly smaller...
        byteRange: parseInt(process.env.S3_CHUNK_SIZE || "0", 10) || 4 * 1024 * 1024 - 10 * 1024,
    }

    For another customer with very slow upload bandwidth, our Lambda function times out before finishing the upload. We did not manage to continue the upload with the Aspera Transfer SDK for JS. Therefore, we set up the upload service on a Fargate task, which does not have a time limit.



    ------------------------------
    Riva Saringer
    ------------------------------



  • 26.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Tue April 23, 2024 06:39 AM

    Hi Jose Gomez and Everyone,

    "tls": {
        "enabled": true,
        "certificate": "/path/to/cert.pem",
        "key": "/path/to/key.pem"

    }

    I used the Setting below, I am able to use https protocol and upload the file to IBM Aspera share from my C# window application.

    But the certificate and Key file provided in SDK are not working. I am bypassing the authenticity of the certificate and key and accepting the risk by using below code. I also used the Certificate provided in IBM Aspera Connect software installed location, that also didn't work. Is there is way to get Authentic Certificate and public Key files for Https protocol Authentication which I can use with Aspera Transfer SDK.

    Thanks in advance,

    Kuldeep P



    ------------------------------
    Kuldeep P
    ------------------------------



  • 27.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Tue April 23, 2024 11:24 AM

    > But the certificate and key file provided in the SDK are not working.

    ...

    > Is there a way to get an authentic certificate and public key file for HTTPS authentication that I can use with the Aspera Transfer SDK?

    These certificates (aspera/etc/aspera_server_*.pem) are self-signed certificates that can be used for testing. You can also create your own self-signed certificate or use a Certificate Authority to issue signed certificates for your use. One thing to note, if you want to validate the certificate's authenticity, is that it will need to be issued for the hostname or IP address you use in your program to connect (in your example 127.0.0.1), and either the certificate itself (if self-signed) or the root certificate of the Certificate Authority needs to be in the certificate trust store used by your C# Windows application. I'd imagine in your environment that would be the operating system certificate store.
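
    For example, a self-signed certificate for 127.0.0.1 can be generated with a reasonably recent OpenSSL (1.1.1 or later); the file names and validity period below are just examples:

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=127.0.0.1" -addext "subjectAltName=IP:127.0.0.1" \
        -keyout key.pem -out cert.pem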



    ------------------------------
    Jose Gomez
    ------------------------------



  • 28.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Tue April 23, 2024 06:57 AM

    Hi Everyone,

    How can I achieve resume and overwrite functionality using the Aspera Transfer SDK? Can someone help me with this, please?



    ------------------------------
    Kuldeep P
    ------------------------------



  • 29.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Tue April 23, 2024 11:32 AM

    You can specify overwrite and resume policies in your transfer spec. See the possible options and a description of each in the documentation for 'overwrite' and 'resume_policy' at https://developer.ibm.com/apis/catalog/aspera--aspera-transfer-sdk/API%20Reference#transfersdk.Filesystem

    {
        "session_initiation": { ... },
        ...
        "file_system": {
            "overwrite": "diff",
            "resume_policy": "sparse_checksum"
        },
        "assets": { ... }
    }



    ------------------------------
    Jose Gomez
    ------------------------------



  • 30.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Thu May 02, 2024 02:54 AM
    Edited by Kuldeep P Thu May 02, 2024 02:57 AM

    How can I create a new folder with a given name inside a specified share?

    Hi Jose Gomez,

    Thanks for the 'overwrite' and 'resume_policy' suggestion, it helped me. I am using the V1 transfer specification, and I was able to add the 'overwrite' and 'resume_policy' settings to it and resume a transfer. Thanks for your input, it helped me focus on the required settings.

    Next, I am trying to create a new folder inside a share folder. I am providing a destination root (the "share" folder path), and then I need to provide a local directory path for upload in the transfer specification. I am able to transfer all the files in the local directory path to the Aspera share, but I need to create a folder and rename it to the required name (the same as the local directory name). Can you please help me with how to create a folder with a given name inside a specified share using the SDK and API?

    I need to achieve these two functionalities in code. If Jose Gomez or anyone from the community can help me with this, it would be great. Thanks in advance.



    ------------------------------
    Kuldeep P
    ------------------------------



  • 31.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Thu May 02, 2024 01:22 PM

    Hi,

    Glad the suggestion helped. To create new folders or rename outside of a transfer you can use the Node API: https://developer.ibm.com/apis/catalog/aspera--aspera-node-api/Introduction

    See for instance the /files/{id} endpoint if you use access keys to authenticate and configure the storage root for your transfers, or the /files/create and /files/rename endpoints if you use absolute docroots.

    If you want to do the directory creation or renaming within the scope of a transfer, you can also specify your paths as pairs of source and destination: the transfer will take the contents from the source path and store them under the destination path, so you can have a different directory organization and naming scheme.
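
    For example, a paths entry in the transfer spec along these lines (a sketch; the names are placeholders):

    "paths": [
        {
            "source": "/local/reports",
            "destination": "/sharename/reports_renamed"
        }
    ]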



    ------------------------------
    Jose Gomez
    ------------------------------



  • 32.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Thu May 09, 2024 11:24 AM

    Hi Jose Gomez,

    I am able to create new folder in the Aspera Share. Thank you so much for your inputs.

    The next two challenges I am facing are:

    1. I am unable to stop the transfer once it has started.
    2. I need to list the folder and file names (only the name list) inside a share in Aspera Shares.

    1. I am unable to stop the transfer once it has started.

             We have a requirement that we should be able to stop a transfer while the upload is in progress. I am using the same class variable/instance "asperaTransferClient" to start and stop the transfer, but even after calling StopTransfer mid-transfer, the complete file gets uploaded. Can you please help me with stopping the transfer while the upload is in progress?

    Class variable:

    TransferService.TransferServiceClient asperaTransferClient;

    Code to start the transfer:

    asperaTransferClient = new TransferService.TransferServiceClient(GrpcChannel.ForAddress("https://127.0.0.1:55002"));
    asperaStartTransferResponse = asperaTransferClient.StartTransferWithMonitor(transferRequest);

    Code to stop the transfer:

    asperaTransferClient.StopTransfer(new StopTransferRequest());

    2. I need a list of file and folder names inside the share in Aspera Shares. Can you please let me know which API can be used to get this?

    Thanks in advance,

    Kuldeep P



    ------------------------------
    Kuldeep P
    ------------------------------



  • 33.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Sat May 11, 2024 06:01 PM

    You can use files/page to list directory contents:

    https://developer.ibm.com/apis/catalog/aspera--aspera-node-api/api/API--aspera--ibm-aspera-node-api#post1907190647

    To stop the transfer you need to pass the transfer id that is returned from your call to StartTransferWithMonitor. It should be available by getting the transferId() from your asperaStartTransferResponse.



    ------------------------------
    BEN FORSYTH
    ------------------------------



  • 34.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Thu May 23, 2024 05:27 AM
    Edited by Kuldeep P Thu May 23, 2024 05:28 AM

    Hi BEN FORSYTH,

    Thanks for your input. I am able to browse the files and directories in the Aspera share.

    1. Regarding "To stop the transfer you need to pass the transfer id": where do I need to pass the transfer id? Which method do I need to call?

        I called the StopTransfer method and tried to assign TransferId, but it says TransferId is read-only.

    2. I am trying to get the checksum value of a file which has already been uploaded to IBM Aspera Shares.

        I am able to get a checksum while uploading the file by setting "checksum_type = md5" in the transfer specification.

        But I need the checksum of an existing file in IBM Shares without downloading the file.

        Is there an API to get the checksum value to make sure the already uploaded file is not corrupt?

        https://emea.aspera-qa.com/node_api/files/info

        I tried the API above; it showed all the details of the file, but not the checksum value.

        Please help me with the file checksum validation.

    Thanks and Regards,

    Kuldeep P



    ------------------------------
    Kuldeep P
    ------------------------------



  • 35.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Fri May 24, 2024 07:15 PM

    You must generate a StopTransferRequest:

    https://developer.ibm.com/apis/catalog/aspera--aspera-transfer-sdk/API%20Reference/#transfersdk.StopTransferRequest

    Then with your StopTransferRequest you can call:

    client.StopTransfer(StopTransferRequest)
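
    Roughly like this in C# (an untested sketch: member names are assumptions, so check the generated StopTransferRequest class for the exact shape, and transferId here is assumed to have been read from the StartTransferWithMonitor response as described above):

    var stopRequest = new StopTransferRequest();
    // If TransferId is generated as a read-only repeated field (which would explain the
    // "read only" error mentioned earlier in this thread), populate it with Add():
    stopRequest.TransferId.Add(transferId);
    // If it is instead a plain string field, assign it: stopRequest.TransferId = transferId;
    asperaTransferClient.StopTransfer(stopRequest);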

    We don't have an API to retrieve the MD5 sum of a source file. The checksum of the file is computed in-flight by the sender as it is being read and transferred. Our transfer encryption makes sure that every byte sent matches what is received by the destination.



    ------------------------------
    BEN FORSYTH
    ------------------------------



  • 36.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Tue May 28, 2024 01:27 PM

    Hi Everyone,

    I have a question related to the Aspera Transfer SDK and managing transfers. Specifically, instead of stopping a transfer, I would like to pause it and then resume it later.

    According to the API reference, there is a status "PAUSED" available. However, I'm unsure how to trigger a running transfer to be paused. I assume that calling stopTransfer does not actually pause the transfer, correct? Once the transfer is paused, I would need to call startTransfer again with the appropriate resume_policy in the transfer request to continue the transfer.

    My use case involves uploading from a stream, but the upload might need to be paused because it's running on AWS Lambda, which has timeouts for large files. An example (preferably in JavaScript) in Laurent-Martin's GitHub repository demonstrating this feature would be greatly appreciated :P 

    Thank you!



    ------------------------------
    Riva Saringer
    ------------------------------



  • 37.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted Wed May 29, 2024 04:18 AM
    Edited by Laurent Martin Wed May 29, 2024 04:19 AM

    In general pausing a transfer with Aspera means setting its speed to zero.

    And resuming means setting the speed back to a non-zero value.

    So, you can try calling ModifyTransfer with a transfer spec with target_rate_kbps set to zero.

    I have not tested, though.

    Also, I don't think it changes the state to "PAUSED".



    ------------------------------
    Laurent Martin
    ------------------------------



  • 38.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted 27 days ago

    My issue here is that I would like to resume a transfer from another function call, intending to handle large file uploads via multiple AWS Lambda invocations.

    I have tried various settings related to transfer resumption, but none have been successful. 

    Can anyone confirm if the Aspera SDK's resume functionality is limited to the same client connection? If so, are there any recommended approaches or workarounds to resume transfers across different client connections or functions?



    ------------------------------
    Riva Saringer
    ------------------------------



  • 39.  RE: Uploading to a third party Aspera Shares or Aspera Enterprise location from AWS Lambda

    Posted 27 days ago
    Edited by Laurent Martin 27 days ago

    OK, so it's not pausing and then resuming (with the session still running).

    It's more like starting a transfer, then interrupting it (stopping before completion), then resuming the transfer in a new transfer session:

    • do not re-transfer already completed files (unless they have changed on the source)
    • continue half-transmitted files (unless they have changed on the source)

    This is controlled in the transfer spec with the resume_policy parameter; its values are none, attrs, sparse_csum and full_csum:

    none: always transfer the entire file.

    attrs: check file attributes (size and modification time); resume if they match.

    sparse_csum: check file attributes and a sparse checksum; resume if they match.

    full_csum: check file attributes and a full checksum; resume if they match.

    The new session will resume partial files if the policy is not none.

    A sparse checksum is a checksum of at most 10 MB of the source file (taken in several slices), independently of the file size; it has the advantage of being much faster than a full checksum.
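
    For example, in the transfer spec (a sketch; placement under file_system follows the earlier example in this thread):

    "file_system": {
        "resume_policy": "sparse_csum"
    }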

    ------------------------------
    Laurent Martin
    ------------------------------