I created a Lambda Python function through AWS Cloud9, but I am running into a problem when trying to write to an S3 bucket from the Lambda function. When I test the Python code in Cloud9 it runs fine and writes to the S3 bucket perfectly. When I push it to the Lambda function and run it there, I get the error below. This leads me to believe that the role used when running the application in AWS Cloud9 has different permissions than the role the Lambda function runs under.
The following error is given, and I am looking for suggestions on what I might be missing; below the error I describe the setup:
[ERROR] ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Traceback (most recent call last):
File "/var/task/index.py", line 22, in handler
s3.Bucket(bucket_name).put_object(Key=s3_path, Body=encoded_string)
File "/var/runtime/boto3/resources/factory.py", line 520, in do_action
response = action(self, *args, **kwargs)
File "/var/runtime/boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(**params)
File "/var/runtime/botocore/client.py", line 320, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 623, in _make_api_call
raise error_class(parsed_response, operation_name)
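One way I could confirm the suspected role mismatch is to log the caller identity from inside the handler and compare it with what Cloud9 reports. This is only a diagnostic sketch (the helper name is mine; it simply wraps sts:GetCallerIdentity), not part of my actual code:

import boto3

def log_caller_identity():
    # Prints the ARN of whatever identity the code is currently running as,
    # e.g. the Cloud9 user/role vs. the Lambda execution role.
    sts = boto3.client("sts")
    identity = sts.get_caller_identity()
    print("Running as:", identity["Arn"])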
My code is as follows:
import json
import datetime
from botocore.vendored import requests
import boto3

def handler(event, context):
    print("Start:")
    response = requests.get('https://##########')
    data = response.json()
    for i in data:
        print(i)
        encoded_string = json.dumps(i).encode("utf-8")
        bucket_name = "data"
        file_name = str(i['id']) + ".txt"
        lambda_path = "/tmp/" + file_name
        s3_path = "testBucket/" + file_name
        s3 = boto3.resource("s3")
        s3.Bucket(bucket_name).put_object(Key=s3_path, Body=encoded_string)
...rest of code ...
The .yml file with the necessary permissions is as follows:
AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::Serverless-2016-10-31
- AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: CodeStar projectId used to associate new resources to team members
  CodeDeployRole:
    Type: String
    Description: IAM role to allow AWS CodeDeploy to manage deployment of AWS Lambda functions
  Stage:
    Type: String
    Description: The name for a project pipeline stage, such as Staging or Prod, for which resources are provisioned and deployed.
    Default: ''
Globals:
  Function:
    AutoPublishAlias: live
    DeploymentPreference:
      Enabled: true
      Type: Canary10Percent5Minutes
      Role: !Ref CodeDeployRole
Resources:
  HelloWorld:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.7
      Timeout: 10
      Role:
        Fn::GetAtt:
        - LambdaExecutionRole
        - Arn
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post
  LambdaExecutionRole:
    Description: Creating service role in IAM for AWS Lambda
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub 'CodeStar-${ProjectId}-Execution${Stage}'
      AssumeRolePolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service: [lambda.amazonaws.com]
          Action: sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      PermissionsBoundary: !Sub 'arn:${AWS::Partition}:iam::${AWS::AccountId}:policy/CodeStar_${ProjectId}_PermissionsBoundary'
The role the Lambda function executes with is the following: IAM role
Any suggestions on where this might be going wrong? I know I need to provide the correct access permissions, but I am not sure where else I need to specify them (I really don't want to make my S3 bucket public just so my Lambda function can reach it). In Lambda, S3 shows up as a resource the function's role can access, but I still get the error above.
Answer 1
I believe the solution should be as simple as changing your LambdaExecutionRole.
Change it to:
LambdaExecutionRole:
  Description: Creating service role in IAM for AWS Lambda
  Type: AWS::IAM::Role
  Properties:
    RoleName: !Sub 'CodeStar-${ProjectId}-Execution${Stage}'
    AssumeRolePolicyDocument:
      Statement:
      - Effect: Allow
        Principal:
          Service: [lambda.amazonaws.com]
        Action: sts:AssumeRole
    Path: /
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      - arn:aws:iam::aws:policy/AmazonS3FullAccess   # <== Add
    # PermissionsBoundary: !Sub ...                  # <== Comment out
If this works, you can then try restricting the S3 permissions to your specific bucket, but first try adding the AmazonS3FullAccess managed policy and commenting out the PermissionsBoundary.
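If you later want to scope it down, one option is an inline policy on the role instead of the managed policy. This is only a sketch: the policy name is made up, the action is limited to what the question's code needs (s3:PutObject), and the bucket name data is taken from the question's code, so adjust it to your real bucket:

LambdaExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    # ...existing properties as above...
    Policies:
      - PolicyName: S3WriteToDataBucket          # hypothetical name
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - s3:PutObject
              Resource: arn:aws:s3:::data/*      # bucket name taken from the question's code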
Hope that helps :)