An n8n community node for AWS Bedrock with AssumeRole authentication support.
- 🔐 AssumeRole Authentication: Secure cross-account access using AWS STS AssumeRole
- 🤖 Multiple Claude Models: Support for Claude 3.5 Sonnet, Claude 3 Opus, Sonnet, Haiku, and more
- 🎨 Image Generation: Support for Amazon Nova Canvas and Titan Image Generator models
- 🤝 AI Agent Compatible: Includes Chat Model sub-node for use with n8n AI Agent
- ⚡ Credential Caching: Automatic caching of temporary credentials with expiration handling
- 🛡️ Error Handling: Comprehensive error handling and logging
- 🔄 Batch Processing: Process multiple items in a single workflow execution
- 📊 Usage Tracking: Detailed usage information and response metadata
This package includes two nodes:
- AWS Bedrock (AssumeRole) - Standalone node for direct AWS Bedrock API calls
- AWS Bedrock Chat Model - Chat Model sub-node for use with n8n AI Agent
This node uses AWS Bedrock inference profiles for optimal performance and availability:

- Claude 3.5 Sonnet v2 - `us.anthropic.claude-3-5-sonnet-20241022-v2:0` (default)
- Claude 3.5 Sonnet v1 - `us.anthropic.claude-3-5-sonnet-20240620-v1:0`
- Claude 3.5 Haiku - `us.anthropic.claude-3-5-haiku-20241022-v1:0`
- Claude 3.7 Sonnet - `us.anthropic.claude-3-7-sonnet-20250219-v1:0`
- Claude Sonnet 4 - `us.anthropic.claude-sonnet-4-20250514-v1:0`
- Claude Sonnet 4.5 - `us.anthropic.claude-sonnet-4-5-20250929-v1:0`
- Claude Haiku 4.5 - `us.anthropic.claude-haiku-4-5-20251001-v1:0`
- Claude Opus 4 - `us.anthropic.claude-opus-4-20250514-v1:0`
- Claude Opus 4.1 - `us.anthropic.claude-opus-4-1-20250805-v1:0`
- Amazon Nova Canvas v1 - `amazon.nova-canvas-v1:0` - State-of-the-art image generation
- Amazon Titan Image Generator v2 - `amazon.titan-image-generator-v2:0` - High-quality image generation with advanced controls
```shell
# Install globally for n8n
npm install -g n8n-nodes-aws-bedrock-assumerole

# Or install locally in your n8n custom nodes directory
cd ~/.n8n/custom/
npm install n8n-nodes-aws-bedrock-assumerole
```

```shell
# Clone the repository
git clone https://github.com/cabify/n8n-nodes-aws-bedrock-assumerole.git
cd n8n-nodes-aws-bedrock-assumerole

# Install dependencies
npm install

# Build the project
npm run build

# Link for local development
npm link

# In your n8n installation directory
npm link n8n-nodes-aws-bedrock-assumerole
```

You have two options for providing AWS credentials:
Set these environment variables on your n8n server:

```shell
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"
```

Alternatively, fill in the credential fields directly in the n8n UI (less secure).
- Go to Credentials in your n8n instance
- Click Add Credential
- Search for "AWS Assume Role"
- Configure the following:
  - Access Key ID: Leave empty to use the environment variable (recommended)
  - Secret Access Key: Leave empty to use the environment variable (recommended)
  - Role ARN to Assume: `arn:aws:iam::<account-id>:role/<role-name>`
  - AWS Region: `us-east-1` (or your preferred region)
  - Session Duration: `3600` (1 hour, adjust as needed)
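Under the hood, these credential fields map onto a standard STS AssumeRole call. A minimal Python sketch of that mapping is below; the function and session name are illustrative assumptions, not the node's actual implementation:

```python
def build_assume_role_params(role_arn: str, duration: int = 3600,
                             session_name: str = "n8n-bedrock") -> dict:
    """Build the kwargs for an STS AssumeRole call from the credential fields.

    The base access key / secret key are read from the environment when the
    credential fields are left empty (the recommended setup), so they do not
    appear here.
    """
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "DurationSeconds": duration,
    }

# With boto3 the call would then be roughly:
#   sts = boto3.client("sts")
#   creds = sts.assume_role(**build_assume_role_params(role_arn))["Credentials"]

params = build_assume_role_params("arn:aws:iam::123456789012:role/BedrockRole")
print(params["RoleArn"])
```

`DurationSeconds` corresponds directly to the Session Duration field, which is why the default shown in the UI is 3600.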
The base AWS credentials need the following permission:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<target-account-id>:role/<target-role-name>"
    }
  ]
}
```

The role to be assumed needs:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel"
      ],
      "Resource": [
        "arn:aws:bedrock:*::foundation-model/anthropic.*"
      ]
    }
  ]
}
```

And the trust relationship:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<base-account-id>:role/<base-role-name>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

This node supports AWS Bedrock Application Inference Profiles, allowing you to route traffic through specific profiles for cost and usage tracking.
In the AWS AssumeRole credential, you can optionally configure:
- Application Inference Profile Account ID: The AWS account ID where your application inference profiles live.
- Application Inference Profiles JSON: A JSON object mapping Bedrock model IDs to application inference profile IDs.
Example JSON:

```json
{
  "us.anthropic.claude-3-5-sonnet-20240620-v1:0": "hs4uvikaus5b",
  "us.anthropic.claude-3-5-sonnet-20241022-v2:0": "0xumpou8xusv",
  "us.anthropic.claude-3-5-haiku-20241022-v1:0": "abc123haiku"
}
```

- The key is the Bedrock model ID (for example, `us.anthropic.claude-3-5-sonnet-20241022-v2:0`).
- The value is the application inference profile ID (for example, `0xumpou8xusv`), not the full ARN.
The node then builds the final ARN internally using:

```
arn:aws:bedrock:{region}:{account-id}:application-inference-profile/{profile-id}
```
If the JSON is invalid, the node will fail with a clear error message pointing to the Application Inference Profiles JSON field.
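The parsing-and-ARN-building behaviour described above can be sketched as a small helper. This is a hypothetical Python illustration of the logic, not the node's actual (TypeScript) code:

```python
import json

def build_profile_arn(profiles_json, model_id, region, account_id):
    """Return the application inference profile ARN for a model ID,
    or None if the model is not in the mapping.

    Raises a clear error when the configured JSON is invalid, mirroring
    the node's behaviour for the Application Inference Profiles JSON field.
    """
    try:
        mapping = json.loads(profiles_json)
    except json.JSONDecodeError as err:
        raise ValueError(
            f"Invalid Application Inference Profiles JSON: {err}"
        ) from err
    profile_id = mapping.get(model_id)
    if profile_id is None:
        return None
    return (f"arn:aws:bedrock:{region}:{account_id}"
            f":application-inference-profile/{profile_id}")

arn = build_profile_arn(
    '{"us.anthropic.claude-3-5-sonnet-20241022-v2:0": "0xumpou8xusv"}',
    "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
    "us-east-1",
    "123456789012",
)
print(arn)
```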
The Model ID dropdown in the node behaves as follows:
- If Application Inference Profiles JSON is empty or not set:
- The dropdown shows all supported Claude models (the default static list).
- If Application Inference Profiles JSON is present and valid:
- The dropdown shows only the models present in that JSON.
- Known model IDs are displayed with friendly names (for example, "Claude 3.5 Sonnet v2"), unknown ones are shown as their raw model ID.
This ensures that, when you configure specific models and profiles in the credential, users of the node can only select those models.
If no application inference profile mapping is found for a selected model ID, the node will:
- Try the legacy single Application Inference Profile ID field (if configured).
- Otherwise, fall back to using the raw model ID directly (original behaviour).
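The fallback order above (mapping JSON, then the legacy single profile field, then the raw model ID) can be expressed as one resolution function. Names here are illustrative assumptions for the sketch:

```python
ARN_TEMPLATE = ("arn:aws:bedrock:{region}:{account}"
                ":application-inference-profile/{profile}")

def resolve_invocation_target(model_id, profiles=None, legacy_profile_id=None,
                              region="us-east-1", account_id=""):
    """Resolve what to send to Bedrock for a selected model ID.

    1. Use the mapping from the Application Inference Profiles JSON, if present.
    2. Otherwise try the legacy single Application Inference Profile ID field.
    3. Otherwise fall back to the raw model ID (original behaviour).
    """
    if profiles and model_id in profiles:
        return ARN_TEMPLATE.format(region=region, account=account_id,
                                   profile=profiles[model_id])
    if legacy_profile_id:
        return ARN_TEMPLATE.format(region=region, account=account_id,
                                   profile=legacy_profile_id)
    return model_id

print(resolve_invocation_target("us.anthropic.claude-3-5-haiku-20241022-v1:0"))
```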
The AWS Bedrock Chat Model node is designed to work with n8n's AI Agent node, enabling conversational AI workflows with tool calling, memory, and more.
- Add an AI Agent node to your workflow
- Connect the AWS Bedrock Chat Model node to the "Chat Model" input of the AI Agent
- Select your credential in the Chat Model node (the same AWS AssumeRole credential)
- Choose your model (e.g., Claude 3.5 Sonnet v2)
- Add tools (optional): Connect tool nodes like Vector Store, Calculator, HTTP Request, etc.
- Add memory (optional): Connect a memory node for conversation history
- ✅ Tool Calling: The AI can use tools to fetch data, perform calculations, etc.
- ✅ Conversation Memory: Maintain context across multiple interactions
- ✅ Structured Output: Parse responses into structured data
- ✅ Multi-step Reasoning: The agent can plan and execute complex tasks
For simple, direct API calls without AI Agent features, use the AWS Bedrock (AssumeRole) node.
- Add the AWS Bedrock (AssumeRole) node to your workflow
- Select your credential (created in step 2 above)
- Configure the node:
- Model ID: Choose from the dropdown (e.g., Claude 3.5 Sonnet)
- Prompt: Enter your prompt or use an expression to get it from previous nodes
- Max Tokens: Set the maximum response length (default: 1000)
- Temperature: Control randomness (0.0 = deterministic, 1.0 = very random)
Analyze the following customer feedback and provide:
1. Sentiment (positive/negative/neutral)
2. Key themes
3. Suggested actions
Customer feedback: "The service was okay but the wait time was too long."
Note: Image analysis is currently only available with the standalone AWS Bedrock (AssumeRole) node, not with the Chat Model sub-node.
To analyze an image together with a text prompt using Claude models that support vision capabilities:
- Add a Form Trigger (or any node that outputs binary data) with a file field, for example labeled `image_to_analize`.
- Connect that node to AWS Bedrock (AssumeRole).
- Configure the Bedrock node:
  - Model ID: Select any Claude model that supports image input (for example, Claude Sonnet 4).
  - Input Type: Set to `Text and Image`.
  - Image Binary Property: Set to the name of the binary field that contains the uploaded image. For a Form Trigger file field labeled `image_to_analize`, the binary key is also `image_to_analize`.
  - Prompt: Provide the instruction you want to send together with the image, for example: `Describe what is written in this image.`
- Execute the workflow by submitting the form with an image file.
You can import the ready-to-use example workflow from examples/image-analysis-workflow.json.
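For a text-plus-image call, the request body sent to Claude on Bedrock typically follows the Anthropic Messages format with a base64 image block. The sketch below shows what such a payload looks like; the exact body the node builds internally may differ:

```python
import base64
import json

def build_vision_body(prompt: str, image_bytes: bytes,
                      media_type: str = "image/png",
                      max_tokens: int = 1000) -> str:
    """Build an Anthropic Messages request body with one image and one text block."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image", "source": {
                    "type": "base64",
                    "media_type": media_type,
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
                {"type": "text", "text": prompt},
            ],
        }],
    })

body = build_vision_body("Describe what is written in this image.",
                         b"\x89PNG fake image bytes")
print(json.loads(body)["messages"][0]["content"][1]["text"])
```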
Generate images from text prompts using Amazon Nova Canvas or Titan Image Generator models:
- Add the AWS Bedrock (AssumeRole) node to your workflow.
- Configure the Bedrock node:
  - Model ID: Select `Amazon Nova Canvas v1` or `Amazon Titan Image Generator v2`.
  - Prompt: Describe the image you want to generate (e.g., "A futuristic city at sunset with flying cars").
  - Negative Prompt (optional): Describe what NOT to include (e.g., "blurry, low quality, text").
  - Image Width/Height: Choose the dimensions (512, 768, 1024, or 1280 pixels).
  - Image Quality: Select `standard` or `premium`.
  - Number of Images: Generate 1-4 images at once.
  - Seed (optional): Set a specific seed for reproducible results (0 = random).
  - CFG Scale (Titan Image only): Controls how closely the image follows the prompt (1-15).
The node outputs binary image data that can be:
- Saved to disk using the Write Binary File node
- Uploaded to cloud storage (S3, Google Drive, etc.)
- Sent via email or messaging platforms
- Further processed in your workflow
For image generation models, the node returns:

```json
{
  "modelId": "arn:aws:bedrock:us-east-1:123456789:application-inference-profile/abc123",
  "configuredModelId": "amazon.nova-canvas-v1:0",
  "prompt": "A futuristic city at sunset",
  "imageIndex": 0,
  "totalImages": 1,
  "imageWidth": 1024,
  "imageHeight": 1024,
  "imageQuality": "standard",
  "timestamp": "2026-01-08T10:00:00.000Z"
}
```

The generated image is available in the `binary.data` property as a PNG file.
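Nova Canvas and Titan Image Generator typically return the generated images as base64 strings in an `images` array; decoding them into PNG bytes is straightforward. A sketch, assuming that response shape:

```python
import base64
import json

def extract_png_images(response_body: str) -> list:
    """Decode base64 images from a Nova Canvas / Titan-style response body.

    Assumes the body contains an "images" array of base64-encoded strings,
    which is how these models typically return generated images.
    """
    payload = json.loads(response_body)
    return [base64.b64decode(b64) for b64 in payload.get("images", [])]

# Simulated response for illustration
fake_response = json.dumps(
    {"images": [base64.b64encode(b"png-bytes").decode("ascii")]}
)
images = extract_png_images(fake_response)
print(len(images))
```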
Both Nova Canvas and Titan Image Generator support advanced image editing capabilities:
| Task Type | Description | Required Fields |
|---|---|---|
| Text to Image | Generate a new image from a text prompt | Prompt |
| Inpainting | Modify areas inside a masked region | Source Image, Mask (prompt or image), Prompt |
| Outpainting | Extend or modify areas outside a masked region | Source Image, Mask (prompt or image), Prompt |
| Image Variation | Create variations of an existing image | Source Image, Prompt (optional) |
| Background Removal | Remove the background (outputs transparent PNG) | Source Image |
Replace part of an image based on a text description of the area to modify:
- Add a node that provides an image (e.g., Read Binary File, HTTP Request, or Form Trigger).
- Add the AWS Bedrock (AssumeRole) node.
- Configure:
  - Model ID: Select `Amazon Nova Canvas v1` or `Amazon Titan Image Generator v2`
  - Image Task Type: Select `Inpainting (Edit Inside Mask)`
  - Source Image Binary Property: `data` (or the name of your binary property)
  - Mask Prompt: Describe the area to modify (e.g., "the sky", "the person's shirt")
  - Prompt: Describe what to put in that area (e.g., "a beautiful sunset sky")
  - Negative Prompt (optional): What to avoid
Extend an image beyond its original boundaries:
- Provide a source image.
- Configure:
  - Image Task Type: Select `Outpainting (Edit Outside Mask)`
  - Mask Prompt: Describe the area to preserve (e.g., "the main subject")
  - Prompt: Describe what to generate in the extended area
  - Outpainting Mode: `Default` (allows blending) or `Precise` (strict boundary)
Create variations of an existing image:
- Provide a source image.
- Configure:
  - Image Task Type: Select `Image Variation`
  - Similarity Strength: 0.2 (more variation) to 1.0 (more similar to original)
  - Prompt (optional): Guide the variation direction
Remove the background from an image (outputs transparent PNG):
- Provide a source image.
- Configure:
  - Image Task Type: Select `Background Removal`
  - No prompt needed - the model automatically detects and removes the background
For Inpainting and Outpainting, you can specify the mask in two ways:
- Mask Prompt (recommended): A text description of the area to mask (e.g., "the sky", "the person's face")
- Mask Image: A binary black/white image where:
- Black pixels = area to modify
- White pixels = area to preserve
If both are provided, the Mask Image takes precedence.
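The precedence rule above is simple to express in code. A hypothetical sketch of the selection logic (field names `maskImage`/`maskPrompt` follow the Titan/Nova request conventions, but treat them as an assumption here):

```python
def select_mask(mask_prompt=None, mask_image_b64=None) -> dict:
    """Pick the mask parameter for an inpainting/outpainting request.

    Mask Image takes precedence over Mask Prompt when both are provided.
    """
    if mask_image_b64:
        return {"maskImage": mask_image_b64}
    if mask_prompt:
        return {"maskPrompt": mask_prompt}
    raise ValueError("Inpainting/Outpainting requires a mask prompt or a mask image")

print(select_mask(mask_prompt="the sky"))
```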
For more details on image editing capabilities, see the AWS Bedrock documentation for Amazon Nova Canvas and the Amazon Titan Image Generator.
Configure image generation models in your credentials JSON just like Claude models:

```json
{
  "us.anthropic.claude-3-5-sonnet-20241022-v2:0": "0xumpou8xusv",
  "amazon.nova-canvas-v1:0": "b3tcu2bezmae",
  "amazon.titan-image-generator-v2:0": "12fut6sh2vgi"
}
```

The AWS Bedrock (AssumeRole) standalone node returns a JSON object with:
```json
{
  "modelId": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
  "prompt": "Your original prompt",
  "response": {
    "content": [
      {
        "text": "The AI response text",
        "type": "text"
      }
    ],
    "usage": {
      "input_tokens": 25,
      "output_tokens": 150
    }
  },
  "usage": {
    "input_tokens": 25,
    "output_tokens": 150
  },
  "content": "The AI response text",
  "timestamp": "2024-11-12T17:46:00.000Z"
}
```

| Feature | AWS Bedrock Chat Model | AWS Bedrock (AssumeRole) |
|---|---|---|
| Use Case | AI Agent workflows | Direct API calls |
| Tool Calling | ✅ Yes (via AI Agent) | ❌ No |
| Conversation Memory | ✅ Yes (via AI Agent) | ❌ No |
| Image Analysis | ❌ Not yet supported | ✅ Yes |
| Image Generation | ❌ No | ✅ Yes (Nova Canvas, Titan Image) |
| Batch Processing | ❌ No | ✅ Yes |
| Structured Output | ✅ Yes (via AI Agent) | ❌ No |
| Best For | Conversational AI, agents with tools | Simple prompts, image analysis/generation, batch jobs |
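A downstream workflow step can read the standalone node's output fields directly. A small Python illustration (the helper name is hypothetical) of pulling the response text and token usage out of one output item:

```python
def summarize_output(item: dict) -> str:
    """Return the response text plus token usage from a standalone node output item."""
    usage = item.get("usage", {})
    return (f"{item['content']} "
            f"(in={usage.get('input_tokens')}, out={usage.get('output_tokens')})")

# Sample shaped like the output format documented above
sample = {
    "content": "The AI response text",
    "usage": {"input_tokens": 25, "output_tokens": 150},
}
summary = summarize_output(sample)
print(summary)
```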
- Node.js 18+
- npm or yarn
- TypeScript
```shell
# Clone the repository
git clone https://github.com/cabify/n8n-nodes-aws-bedrock-assumerole.git
cd n8n-nodes-aws-bedrock-assumerole

# Install dependencies
npm install

# Build the project
npm run build

# Run linting
npm run lint

# Run tests
npm test
```

```
n8n-nodes-aws-bedrock-assumerole/
├── credentials/
│   └── AwsAssumeRole.credentials.ts   # AWS AssumeRole credential definition
├── nodes/
│   └── AwsBedrockAssumeRole.node.ts   # Main node implementation
├── icons/
│   ├── aws.svg                        # AWS credential icon
│   └── bedrock.svg                    # Bedrock node icon
├── dist/                              # Compiled JavaScript (generated)
├── package.json                       # Package configuration
├── tsconfig.json                      # TypeScript configuration
├── .eslintrc.js                       # ESLint configuration
├── .prettierrc                        # Prettier configuration
└── README.md                          # This file
```
- Ensure `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are set as environment variables
- Or fill in the credential fields in the n8n UI
- Verify the Role ARN is correct
- Check that the base credentials have `sts:AssumeRole` permission
- Ensure the target role trusts the base account/role
- Verify the assumed role has `bedrock:InvokeModel` permission
- Check that the model ID is available in your AWS region
- Ensure Bedrock is enabled in your AWS account
- Restart n8n after installation
- Check that the package is installed in the correct location
- Verify the package.json n8n configuration is correct
The node provides detailed console logging. Check your n8n logs for:

```
[AWS Bedrock] Resolved credentials
[AWS Bedrock] AssumeRole successful
[AWS Bedrock] Model response received
```
This project is developed and maintained by:
This project includes a Makefile for easy development and deployment:
```shell
# Show all available commands
make help

# Development
make install      # Install dependencies
make build        # Build the project
make dev          # Build and start Docker for local testing
make clean        # Clean build artifacts

# Docker
make docker-up    # Start Docker containers
make docker-down  # Stop Docker containers
make docker-logs  # Show Docker logs

# Deployment
make publish      # Publish to npm (interactive)
make sync         # Sync repositories (GitHub + GitLab)
make release      # Full release: build + publish + sync
```

The `make publish` command provides an interactive workflow that handles everything:
Choose the type of version bump:
- patch (1.0.2 → 1.0.3) - Bug fixes
- minor (1.0.2 → 1.1.0) - New features (backwards compatible)
- major (1.0.2 → 2.0.0) - Breaking changes
- custom - Specify version manually
Select the types of changes included:
- Added - New features
- Changed - Changes in existing functionality
- Deprecated - Soon-to-be removed features
- Removed - Removed features
- Fixed - Bug fixes
- Security - Security fixes
Enter detailed changes for each selected section. The script will automatically:

- Update `CHANGELOG.md` with proper formatting
- Follow the Keep a Changelog format
- Add the current date
- Insert the new entry at the top

The script will:

- Build the project (`npm run build`)
- Publish to npm with public access
- Commit changes to `package.json`, `package-lock.json`, and `CHANGELOG.md`
- Create a git tag (e.g., `v1.0.2`)
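The changelog entry the script generates can be pictured with a short sketch. This is an illustrative approximation of the Keep a Changelog formatting described above, not the script's actual code:

```python
from datetime import date

def changelog_entry(version: str, sections: dict) -> str:
    """Format a Keep a Changelog style entry, dated today, for insertion at the top.

    `sections` maps a change type (e.g. "Fixed") to a list of change lines.
    """
    lines = [f"## [{version}] - {date.today().isoformat()}"]
    for section, items in sections.items():
        lines.append(f"### {section}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

entry = changelog_entry(
    "1.0.3", {"Fixed": ["Custom SVG icons not displaying correctly"]}
)
print(entry)
```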
```shell
# Start the publish process
make publish

# Follow the prompts:
# 1. Select version bump: 1 (patch)
# 2. Select change types: 5 (Fixed)
# 3. Enter changes:
#    - Fixed custom SVG icons not displaying correctly
#    - Removed unused code and imports
# 4. Confirm publish: y

# After publishing, sync repositories
make sync

# Or do everything in one command:
make release
```

The project supports syncing to multiple repositories:
- GitHub: https://github.com/cabify/n8n-nodes-aws-bedrock-assumerole
- GitLab: https://gitlab.otters.xyz/platform/business-automation/n8n-nodes-aws-bedrock-assumerole
The `make sync` command will:
- Push code to both GitHub and GitLab
- Push all tags to both repositories
- Verify you're on the main branch
- Show current status before pushing
If you prefer not to use Make:
```shell
# Install dependencies
npm install

# Build
npm run build

# Start Docker for testing
docker-compose up -d

# View logs
docker-compose logs -f n8n

# Publish manually
npm version patch   # or minor, major
npm run build
npm publish --access public
git push && git push --tags
```

```
n8n-bedrock-node/
├── credentials/
│   ├── AwsAssumeRole.credentials.ts   # Credential definition
│   └── aws.svg                        # AWS icon
├── nodes/
│   ├── AwsBedrockAssumeRole.node.ts   # Main node implementation
│   └── bedrock.svg                    # Bedrock icon
├── icons/
│   ├── aws.svg                        # Source AWS icon
│   └── bedrock.svg                    # Source Bedrock icon
├── dist/                              # Compiled output
├── docker-compose.yml                 # Local development setup
├── Makefile                           # Development commands
├── publish-npm.sh                     # npm publish script
├── sync-repos.sh                      # Repository sync script
└── package.json                       # Package configuration
```
- `npm run build` - Compile TypeScript and copy icons
- `npm run copy-icons` - Copy icons to dist directories
- `npm run lint` - Run ESLint (requires setup)
- `npm test` - Run tests (if available)
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- 📧 Email: business-automation@cabify.com
- 🐛 Issues: GitHub Issues
- 📖 n8n Documentation: n8n.io/docs
- Built for the n8n workflow automation platform
- Uses AWS SDK v3 for optimal performance
- Inspired by the need for secure cross-account AWS Bedrock access
