
[Central_v2] - User Sign Up - Teacher IDs #1366
Open
drewjhart opened this issue Nov 27, 2024 · 1 comment
drewjhart commented Nov 27, 2024

The final piece of the User puzzle is handling the upload of Teacher IDs, in this part of the Sign Up page:

[Image: the Teacher ID upload section of the Sign Up page]

Security:
Short of student data, this is the most sensitive data we will be handling at RightOn, so security is paramount. We want to make sure we offload all authorization of this information to AWS rather than handling any of it ourselves.

In terms of the actual authorization required, we can follow the same properties as the User table: restrict access to the individuals who actually own the data. In this case, that means users should only ever be able to access their own IDs and should never have access to other teachers' IDs.

Amplify:
Amplify provides out-of-the-box integration with S3 via amplify add storage. However, we are on Amplify Gen 1, which has no way to add multiple storage buckets, and we are already using amplify add storage to handle game and question template images (which have a much higher frequency of use). Additionally, upgrading to Amplify Gen 2 would break our backend, and AWS advises waiting until their automigration tools are released.

All this is to say: we can't use amplify add storage for this.

Lambda:
The alternative, then, is to set up an S3 bucket on our backend and write Lambda functions to securely manage access to the images. We can integrate as many Lambda functions as we want seamlessly into Amplify via amplify add function, and because all the code in a Lambda function runs inside AWS, we can be confident that it will be secure. Ideally, we'll set up some API functions in networking that send our auth credentials to the Lambda function and receive either a putS3Object or getS3Object response.

We can use gpt-o1-mini to give us a starting point for writing the Lambda function and hooking up the S3 integration and our auth credentials. First, we'll run amplify add function to create a new Node.js Lambda function. We're also (I think) going to need to create the S3 bucket in the console (because we can't use Amplify for this).
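As a rough sketch, one of those networking functions might look like the following on the frontend. Everything here is an assumption rather than a final design: it presumes Amplify JS v5's API and Auth categories, a REST API named 'teacherIdApi', and an /s3object/{key} route backed by the Lambda (all placeholder names).

// Sketch only — the API name, route, and helper are placeholders, not final code.
import { API, Auth } from 'aws-amplify';

async function uploadTeacherId(file) {
    // Grab the current user's ID token and unique ID (sub) from Amplify Auth.
    const session = await Auth.currentSession();
    const idToken = session.getIdToken().getJwtToken();
    const userId = session.getIdToken().payload.sub;

    // Store the object under the user's own prefix so the Lambda's ownership
    // check (the key must start with `${userId}/`) passes.
    const key = `${userId}/${file.name}`;

    // Base64-encode the file for the Lambda's PUT handler.
    const base64 = await new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = () => resolve(reader.result.split(',')[1]);
        reader.onerror = reject;
        reader.readAsDataURL(file);
    });

    // The Cognito User Pool authorizer on API Gateway validates this token.
    return API.put('teacherIdApi', `/s3object/${encodeURIComponent(key)}`, {
        headers: { Authorization: idToken },
        body: { fileContent: base64, contentType: file.type },
    });
}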

drewjhart commented Nov 27, 2024

Here's a basic output from gpt-o1-mini as a starting point:

Overview
1. Authentication Setup: Ensure that AWS Amplify Auth is correctly configured in your project.
2. API Gateway Configuration: Set up API Gateway with appropriate authorizers to pass user identity to Lambda.
3. Lambda Function: Create a Lambda function that:
   - Extracts the authenticated user's identity.
   - Validates ownership of the requested S3 object.
   - Performs the desired S3 operation (e.g., GetObject, PutObject, DeleteObject).
4. IAM Roles and Permissions: Ensure that the Lambda function has the necessary permissions to interact with S3 and that API Gateway is secured.

Step 2: Set Up API Gateway with Lambda Integration
You can add an API to your Amplify project that integrates with the Lambda function we'll create:
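A minimal sketch of the CLI flow (the exact prompts vary by Amplify CLI version, and the path is an assumption):

amplify add api
# Choose REST, provide a path such as /s3object, point it at the function
# created with amplify add function, and restrict access to authenticated
# users only. Note that Amplify-generated REST APIs route through a greedy
# {proxy+} path, so the object key may arrive as event.pathParameters.proxy
# rather than event.pathParameters.key in the Lambda below.
amplify push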

Step 3: Create the Lambda Function
Below is an example of a Lambda function written in Node.js that allows users to perform Get, Put, and Delete operations on their own S3 objects.

Assumptions
- S3 Bucket Structure: Each user has a dedicated folder in the S3 bucket named after their unique user ID (e.g., your-bucket-name/{userId}/).
- Object Keys: When users upload objects, they are stored under their respective folders.
- Cognito Identity: The authenticated user's unique identifier is available as the sub claim on the Cognito User Pool token.
Lambda Function Code

// Filename: index.js
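
// Note: the v2 aws-sdk is preinstalled only on Lambda's Node.js 16.x and
// older runtimes; on Node.js 18.x and newer it must be bundled with the
// function (or the code ported to the v3 @aws-sdk/client-s3 package).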

const AWS = require('aws-sdk');
const S3 = new AWS.S3();

// Replace with your S3 bucket name
const BUCKET_NAME = 'your-bucket-name';

exports.handler = async (event) => {
    try {
        // Extract the HTTP method and query parameters
        const httpMethod = event.httpMethod;
        const queryParams = event.queryStringParameters;
        const pathParameters = event.pathParameters;
        const body = event.body ? JSON.parse(event.body) : null;

        // Extract user identity from the Cognito Authorizer
        const userId = event.requestContext.authorizer.claims.sub; // Unique user ID from Cognito

        // Define allowed operations
        const allowedMethods = ['GET', 'PUT', 'DELETE'];
        if (!allowedMethods.includes(httpMethod)) {
            return {
                statusCode: 405,
                body: JSON.stringify({ message: 'Method Not Allowed' }),
            };
        }

        // Extract the object key from path or query parameters
        // Assuming the object key is passed as a path parameter, e.g., /s3object/{key}
        const objectKey = pathParameters && pathParameters.key
            ? decodeURIComponent(pathParameters.key)
            : null;

        if (!objectKey) {
            return {
                statusCode: 400,
                body: JSON.stringify({ message: 'Bad Request: Missing object key.' }),
            };
        }

        // Ensure the object key starts with the user's ID to enforce ownership
        if (!objectKey.startsWith(`${userId}/`)) {
            return {
                statusCode: 403,
                body: JSON.stringify({ message: 'Forbidden: You do not have access to this object.' }),
            };
        }

        // Define S3 operation based on HTTP method
        let response;
        switch (httpMethod) {
            case 'GET':
                // Get the object from S3
                response = await S3.getObject({
                    Bucket: BUCKET_NAME,
                    Key: objectKey,
                }).promise();

                return {
                    statusCode: 200,
                    headers: {
                        'Content-Type': response.ContentType,
                        'Content-Disposition': `attachment; filename="${objectKey.split('/').pop()}"`,
                    },
                    body: response.Body.toString('base64'),
                    isBase64Encoded: true,
                };

            case 'PUT':
                // Upload the object to S3
                if (!body || !body.fileContent || !body.contentType) {
                    return {
                        statusCode: 400,
                        body: JSON.stringify({ message: 'Bad Request: Missing file content or content type.' }),
                    };
                }

                const buffer = Buffer.from(body.fileContent, 'base64');

                await S3.putObject({
                    Bucket: BUCKET_NAME,
                    Key: objectKey,
                    Body: buffer,
                    ContentType: body.contentType,
                }).promise();

                return {
                    statusCode: 200,
                    body: JSON.stringify({ message: 'File uploaded successfully.' }),
                };

            case 'DELETE':
                // Delete the object from S3
                await S3.deleteObject({
                    Bucket: BUCKET_NAME,
                    Key: objectKey,
                }).promise();

                return {
                    statusCode: 200,
                    body: JSON.stringify({ message: 'File deleted successfully.' }),
                };

            default:
                return {
                    statusCode: 405,
                    body: JSON.stringify({ message: 'Method Not Allowed' }),
                };
        }

    } catch (error) {
        console.error('Error processing request:', error);
        return {
            statusCode: 500,
            body: JSON.stringify({ message: 'Internal Server Error' }),
        };
    }
};

Explanation of the Lambda Function

Imports and Configuration:
- AWS SDK: Used to interact with S3.
- BUCKET_NAME: Replace 'your-bucket-name' with your actual S3 bucket name.

Handler Function:
- HTTP Method & Parameters: Determines the operation based on the HTTP method (GET, PUT, DELETE) and retrieves the necessary parameters.
- User Identity Extraction: Extracts the authenticated user's unique identifier (sub) from the Cognito Authorizer claims. Note: ensure that API Gateway is configured with a Cognito User Pool Authorizer to pass these claims.

Ownership Enforcement:
- Object Key Validation: Ensures that the requested object key starts with the user's ID (userId/), thereby restricting access to only their own objects. Example: if userId is 12345, a valid objectKey would be 12345/myfile.txt.

S3 Operations:
- GET: Retrieves the object from S3 and returns its content as a base64-encoded string with appropriate headers to facilitate download.
- PUT: Expects fileContent (base64-encoded) and contentType in the request body (see the example body after this list), then uploads the object to S3 under the user's folder.
- DELETE: Deletes the specified object from S3.

Error Handling:
- Catches and logs errors, returning a 500 Internal Server Error status code if something goes wrong.
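For example, a PUT request body for the Lambda above would look like this (the content is illustrative):

{
    "fileContent": "<base64-encoded file bytes>",
    "contentType": "image/png"
}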
Configuring API Gateway to Pass Cognito User Identity
To ensure that the Lambda function receives the authenticated user's identity, configure API Gateway with a Cognito User Pool Authorizer:

1. Navigate to API Gateway in the AWS Console.
2. Select your API and go to Authorizers.
3. Create a new authorizer:
   - Type: Cognito
   - Name: e.g., CognitoAuthorizer
   - User Pool: select the Cognito User Pool associated with your Amplify Auth.
4. Attach the authorizer to your API methods: for each method (GET, PUT, DELETE), enable the Cognito Authorizer.

This ensures that only authenticated requests reach the Lambda function and that the user's identity is available in event.requestContext.authorizer.claims.
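For reference, the claims fragment the authorizer attaches to the Lambda event looks roughly like this (values are placeholders):

// Illustrative shape of event.requestContext.authorizer.claims when a
// Cognito User Pool authorizer has validated the caller's ID token.
const exampleClaims = {
    sub: '1a2b3c4d-5678-90ab-cdef-1234567890ab', // unique user ID checked against the object key
    email: 'teacher@example.com',
    'cognito:username': 'teacher1',
    token_use: 'id',
};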
Step 4: IAM Roles and Permissions
Ensure that your Lambda function has the necessary permissions to interact with S3.

Lambda Execution Role:
1. Navigate to the IAM console.
2. Find the role associated with your Lambda function (created automatically by Amplify).
3. Attach the following policy or ensure similar permissions are present:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}

Replace your-bucket-name with your actual bucket name.

API Gateway Execution Permissions:
Amplify typically handles this automatically, but ensure that API Gateway can invoke your Lambda function.
