aws-sdk-java-v2-s3

giuseppe-trisciuoglio/developer-kit · updated Apr 8, 2026

$ npx skills add https://github.com/giuseppe-trisciuoglio/developer-kit --skill aws-sdk-java-v2-s3
Summary

S3 object storage patterns and operations using AWS SDK for Java 2.x.

  • Covers bucket creation, object uploads/downloads, multipart transfers, presigned URLs, and S3 Transfer Manager for optimized file handling
  • Includes synchronous and asynchronous client setup with configurable retry logic, timeouts, and connection pooling
  • Provides Spring Boot integration with configuration classes, service layer patterns, and async/reactive workflows
  • Supports advanced operations: metadata and encryption
skill.md

AWS SDK for Java 2.x - Amazon S3

Overview

Provides patterns for S3 operations: bucket management, object upload/download with multipart support, presigned URLs, S3 Transfer Manager, and S3-specific configurations using AWS SDK for Java 2.x.

When to Use

  • Creating, listing, or deleting S3 buckets with proper configuration
  • Uploading or downloading objects from S3 with metadata and encryption
  • Working with multipart uploads for large files (>100MB) with error handling
  • Generating presigned URLs for temporary access to S3 objects
  • Copying or moving objects between S3 buckets with metadata preservation
  • Setting object metadata, storage classes, and access controls
  • Implementing S3 Transfer Manager for optimized file transfers
  • Integrating S3 with Spring Boot applications for cloud storage

Quick Reference

| Operation | Method | Notes |
| --- | --- | --- |
| Create bucket | createBucket() | Wait with waiter().waitUntilBucketExists() |
| Upload object | putObject() | Use RequestBody.fromFile() |
| Download object | getObject() | Streams to file or memory |
| Delete objects | deleteObjects() | Batch up to 1000 keys |
| Presigned URL | presigner.presignGetObject() | Max 7 days expiration |
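The batch-delete row above can be sketched as follows. This is a minimal sketch, assuming `keys` is a List<String> of object keys and that `s3Client` and `bucketName` are defined as in the setup steps below; the 1,000-key ceiling is S3's per-request limit:

```java
// Build up to 1000 ObjectIdentifiers per request (S3's batch limit)
List<ObjectIdentifier> toDelete = keys.stream()
    .map(k -> ObjectIdentifier.builder().key(k).build())
    .collect(Collectors.toList());

DeleteObjectsRequest request = DeleteObjectsRequest.builder()
    .bucket(bucketName)
    .delete(Delete.builder().objects(toDelete).quiet(true).build())
    .build();

DeleteObjectsResponse response = s3Client.deleteObjects(request);
// Check response.errors() for any keys that failed to delete
```

With quiet(true), the response lists only failed deletions, which keeps responses small for large batches.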

Storage Classes

| Class | Use Case |
| --- | --- |
| STANDARD | Frequently accessed data |
| STANDARD_IA | Infrequently accessed data |
| GLACIER | Long-term archive |
| INTELLIGENT_TIERING | Automatic cost optimization |

Instructions

1. Add Dependencies

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <version>2.20.0</version>
</dependency>

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3-transfer-manager</artifactId>
    <version>2.20.0</version>
</dependency>

2. Create S3 Client

S3Client s3Client = S3Client.builder()
    .region(Region.US_EAST_1)
    .build();

// Alternatively, with custom retry logic
S3Client s3ClientWithRetries = S3Client.builder()
    .region(Region.US_EAST_1)
    .overrideConfiguration(b -> b
        .retryPolicy(RetryPolicy.builder()
            .numRetries(3)
            .build()))
    .build();

3. Create Bucket

CreateBucketRequest request = CreateBucketRequest.builder()
    .bucket(bucketName)
    .build();

s3Client.createBucket(request);

// Wait until ready
s3Client.waiter().waitUntilBucketExists(
    HeadBucketRequest.builder().bucket(bucketName).build()
);

4. Upload Object

PutObjectRequest request = PutObjectRequest.builder()
    .bucket(bucketName)
    .key(key)
    .contentType("application/pdf")
    .serverSideEncryption(ServerSideEncryption.AES256)
    .storageClass(StorageClass.STANDARD_IA)
    .build();

s3Client.putObject(request, RequestBody.fromFile(Paths.get(filePath)));

// Validate upload completion
HeadObjectResponse headResp = s3Client.headObject(HeadObjectRequest.builder()
    .bucket(bucketName)
    .key(key)
    .build());

5. Download Object

GetObjectRequest request = GetObjectRequest.builder()
    .bucket(bucketName)
    .key(key)
    .build();

s3Client.getObject(request, Paths.get(destPath));
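For small objects it is often more convenient to download into memory rather than to a file. A minimal sketch, assuming the same bucketName and key variables as above:

```java
// Download the object into memory (suitable for small payloads only)
ResponseBytes<GetObjectResponse> objectBytes = s3Client.getObjectAsBytes(
    GetObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .build());

byte[] data = objectBytes.asByteArray();
```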

6. Generate Presigned URL

try (S3Presigner presigner = S3Presigner.create()) {
    GetObjectRequest getRequest = GetObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .build();

    GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
        .signatureDuration(Duration.ofMinutes(10))
        .getObjectRequest(getRequest)
        .build();

    String url = presigner.presignGetObject(presignRequest).url().toString();
}
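Uploads can be presigned the same way with presignPutObject, letting a client PUT directly to S3 without holding AWS credentials. A sketch under the same variable assumptions:

```java
try (S3Presigner presigner = S3Presigner.create()) {
    PutObjectRequest putRequest = PutObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .contentType("application/pdf")
        .build();

    PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
        .signatureDuration(Duration.ofMinutes(10))
        .putObjectRequest(putRequest)
        .build();

    String uploadUrl = presigner.presignPutObject(presignRequest).url().toString();
    // The caller must send the same Content-Type header when using this URL
}
```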

7. Use Transfer Manager (Large Files)

try (S3TransferManager tm = S3TransferManager.create()) {
    UploadFileRequest request = UploadFileRequest.builder()
        .putObjectRequest(req -> req.bucket(bucketName).key(key))
        .source(Paths.get(filePath))
        .build();

    FileUpload upload = tm.uploadFile(request);
    CompletedFileUpload result = upload.completionFuture().join();
}
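Transfer Manager handles downloads symmetrically; a sketch assuming a destPath variable for the local destination:

```java
try (S3TransferManager tm = S3TransferManager.create()) {
    DownloadFileRequest request = DownloadFileRequest.builder()
        .getObjectRequest(req -> req.bucket(bucketName).key(key))
        .destination(Paths.get(destPath))
        .build();

    // Large objects are fetched in parallel ranges automatically
    FileDownload download = tm.downloadFile(request);
    CompletedFileDownload result = download.completionFuture().join();
}
```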

Best Practices

Performance

  • Use S3 Transfer Manager: Automatic multipart uploads for files >100MB
  • Reuse S3 Client: Clients are thread-safe; reuse throughout application
  • Enable async operations: Use S3AsyncClient for I/O-bound operations
  • Configure timeouts: Set appropriate timeouts for large file operations
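The async bullet above can be sketched with S3AsyncClient; bucketName, key, and filePath are placeholders as in the earlier steps:

```java
S3AsyncClient asyncClient = S3AsyncClient.builder()
    .region(Region.US_EAST_1)
    .build();

CompletableFuture<PutObjectResponse> future = asyncClient.putObject(
    PutObjectRequest.builder().bucket(bucketName).key(key).build(),
    AsyncRequestBody.fromFile(Paths.get(filePath)));

// Attach a callback instead of blocking the calling thread
future.whenComplete((response, error) -> {
    if (error != null) {
        error.printStackTrace();
    }
});
```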

Security

  • Use temporary credentials: IAM roles or AWS STS for short-lived tokens
  • Enable encryption: Use AES-256 or AWS KMS for sensitive data
  • Use presigned URLs: Avoid exposing credentials with temporary access
  • Validate metadata: Sanitize user-provided metadata

Error Handling

  • Implement retry logic: Exponential backoff for network operations
  • Handle throttling: Back off and retry on 503 Slow Down (and 429) responses
  • Clean up failures: Abort failed multipart uploads
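A sketch of the throttling-handling pattern above, catching the SDK's S3Exception to distinguish throttling from other failures (putRequest and filePath are assumed from the upload step):

```java
try {
    s3Client.putObject(putRequest, RequestBody.fromFile(Paths.get(filePath)));
} catch (S3Exception e) {
    int status = e.statusCode();
    if (status == 503 || status == 429) {
        // Throttled: back off (ideally exponentially) before retrying
    } else {
        throw e; // non-retryable: propagate
    }
}
```

In practice the SDK's built-in retry policy already retries throttled requests; explicit handling like this is only needed for custom backoff or metrics.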

Cost Optimization

  • Use appropriate storage classes: STANDARD, STANDARD_IA, INTELLIGENT_TIERING
  • Implement lifecycle policies: Automatic transition/expiration
  • Minimize API calls: Use batch operations when possible
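Lifecycle policies can also be applied from the SDK. A sketch that transitions a hypothetical logs/ prefix to GLACIER after 30 days and expires it after a year:

```java
// Rule: logs/ objects move to GLACIER at 30 days, expire at 365 days
LifecycleRule rule = LifecycleRule.builder()
    .id("archive-logs")
    .filter(LifecycleRuleFilter.builder().prefix("logs/").build())
    .status(ExpirationStatus.ENABLED)
    .transitions(Transition.builder()
        .days(30)
        .storageClass(TransitionStorageClass.GLACIER)
        .build())
    .expiration(LifecycleExpiration.builder().days(365).build())
    .build();

s3Client.putBucketLifecycleConfiguration(
    PutBucketLifecycleConfigurationRequest.builder()
        .bucket(bucketName)
        .lifecycleConfiguration(BucketLifecycleConfiguration.builder()
            .rules(rule)
            .build())
        .build());
```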

Constraints and Warnings

  • Object Size: Single PUT limited to 5GB; use multipart for larger files
  • Bucket Names: Must be globally unique across all AWS accounts
  • Object Immutability: Objects cannot be modified; must be replaced entirely
  • Consistency: S3 provides strong read-after-write consistency for all operations (since December 2020), including list operations
  • Presigned URLs: Maximum expiration time is 7 days
  • Multipart Uploads: Parts must be at least 5MB except last part

Examples

Complete Upload Workflow with Validation

// 1. Upload with validation
PutObjectRequest putRequest = PutObjectRequest.builder()
    .bucket(bucketName)
    .key(key)
    .contentType(contentType)
    .build();

s3Client.putObject(putRequest, RequestBody.fromFile(Paths.get(filePath)));

// 2. Validate with headObject
HeadObjectResponse headResp = s3Client.headObject(HeadObjectRequest.builder()
    .bucket(bucketName)
    .key(key)
    .build());

// 3. Verify metadata
long fileSize = Files.size(Paths.get(filePath));
if (headResp.contentLength() != fileSize) {
    throw new IllegalStateException("Upload size mismatch");
}

Multipart Upload with Abort-on-Failure

// 1. Initiate multipart upload
CreateMultipartUploadRequest createRequest = CreateMultipartUploadRequest.builder()
    .bucket(bucketName)
    .key(key)
    .build();

CreateMultipartUploadResponse multipartUpload = s3Client.createMultipartUpload(createRequest);
String uploadId = multipartUpload.uploadId();

try {
    // 2. Upload parts
    List<CompletedPart> parts = new ArrayList<>();
    int partNumber = 1;
    byte[] fileBytes = Files.readAllBytes(Paths.get(filePath)); // loads whole file into memory; stream parts for very large files
    int chunkSize = 5 * 1024 * 1024; // 5MB minimum part size (except the last part)

    for (int offset = 0; offset < fileBytes.length; offset += chunkSize) {
        int length = Math.min(chunkSize, fileBytes.length - offset);
        UploadPartRequest uploadPartRequest = UploadPartRequest.builder()
            .bucket(bucketName)
            .key(key)
            .uploadId(uploadId)
            .partNumber(partNumber)
            .build();

        UploadPartResponse partResponse = s3Client.uploadPart(uploadPartRequest,
            RequestBody.fromBytes(Arrays.copyOfRange(fileBytes, offset, offset + length)));

        parts.add(CompletedPart.builder()
            .partNumber(partNumber)
            .eTag(partResponse.eTag())
            .build());
        partNumber++;
    }

    // 3. Complete multipart upload
    CompleteMultipartUploadRequest completeRequest = CompleteMultipartUploadRequest.builder()
        .bucket(bucketName)
        .key(key)
        .uploadId(uploadId)
        .multipartUpload(CompletedMultipartUpload.builder().parts(parts).build())
        .build();
    s3Client.completeMultipartUpload(completeRequest);

} catch (Exception e) {
    // 4. Abort on failure
    AbortMultipartUploadRequest abortRequest = AbortMultipartUploadRequest.builder()
        .bucket(bucketName)
        .key(key)
        .uploadId(uploadId)
        .build();
    s3Client.abortMultipartUpload(abortRequest);
    throw new RuntimeException("Upload failed, cleanup performed", e);
}

References

Related Skills

  • aws-sdk-java-v2-core - Core AWS SDK patterns and configuration
  • spring-boot-dependency-injection - Spring dependency injection patterns
