Wasp8mo ago
David

File upload size limit

I'm trying to enforce a maximum file upload size for my S3 bucket. I've handled the front-end check, but I worry that more advanced users could bypass it and spam my S3 bucket. Here is what I've been trying on the backend (I just added the conditions), but it still uploads files larger than MAX_UPLOAD_SIZE (1 * 1024 * 1024):
export const getUploadFileSignedURLFromS3 = async ({ fileType, userInfo }: S3Upload) => {
  const ex = fileType.split('/')[1];
  const Key = `${userInfo}/${randomUUID()}.${ex}`;
  const Conditions = [
    ["content-length-range", 1, MAX_UPLOAD_SIZE]
  ];

  const s3Params = {
    Bucket: process.env.AWS_S3_FILES_BUCKET,
    Key,
    Conditions,
    ContentType: `${fileType}`
  };

  const command = new PutObjectCommand(s3Params);
  const uploadUrl = await getSignedUrl(s3Client, command, { expiresIn: 3600 });

  return { uploadUrl, key: Key };
}
19 Replies
kapa.ai
kapa.ai8mo ago
Hi @David. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave: ---------------------- Hmm, I don't know enough to give you a confident answer yet. The knowledge sources provided do not contain information about setting a maximum file upload size for an S3 bucket in the context of Wasp. However, you might find it helpful to refer to the AWS S3 documentation or the Prisma documentation for more information on this topic. For Wasp-related queries, you can refer to the Wasp documentation (https://wasp-lang.dev/docs) or ask the community on the Wasp Discord server (https://discord.com/invite/rzdnErX). ---------------------- Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
David
DavidOP8mo ago
Forgot to mention: I'm trying to avoid the process of uploading the file to my server -> confirming its size -> uploading to the S3 bucket. I'd like to keep it a single upload to S3.
MEE6
MEE68mo ago
Wohooo @David, you just became a Waspeteer level 1!
miho
miho8mo ago
I'm not an AWS expert, but I see people online mentioning createPresignedPost as a way to use the conditions: https://zaccharles.medium.com/s3-uploads-proxies-vs-presigned-urls-vs-presigned-posts-9661e2b37932 They say getSignedUrl doesn't really honor the conditions 🤷‍♂️ Alternatively, you could upload the file to the server and then from the server to S3: https://gist.github.com/infomiho/ec379df4e33f3ae3410a251ba3aa81af
Medium
S3 Uploads — Proxies vs Presigned URLs vs Presigned POSTs
What’s the best way to upload a file from a browser to S3? This post compares several options and provides an example codebase.
Gist
Uploading files with Wasp 0.12.3
Uploading files with Wasp 0.12.3. GitHub Gist: instantly share code, notes, and snippets.
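For reference, the difference is that `getSignedUrl` signs a `PutObjectCommand`, whose input has no `Conditions` field, so the condition object is silently ignored; `createPresignedPost` (from `@aws-sdk/s3-presigned-post`) builds a POST policy that S3 itself enforces. A minimal sketch — `buildConditions` is a hypothetical helper, and `MAX_UPLOAD_SIZE`, `s3Client`, `bucket`, `Key`, and `fileType` are assumed to be defined as in the thread:

```typescript
// Hypothetical helper: builds the POST policy conditions that S3 enforces
// server-side, regardless of what the client sends.
type ContentLengthRange = ["content-length-range", number, number];
type PostCondition = ContentLengthRange | Record<string, string>;

function buildConditions(maxBytes: number, contentType: string): PostCondition[] {
  return [
    ["content-length-range", 1, maxBytes], // S3 rejects bodies outside this byte range
    { "Content-Type": contentType },       // must match the form field sent by the client
  ];
}

// Sketch of the signing call (requires AWS credentials and @aws-sdk/s3-presigned-post):
//
// import { createPresignedPost } from "@aws-sdk/s3-presigned-post";
// const { url, fields } = await createPresignedPost(s3Client, {
//   Bucket: bucket,
//   Key,
//   Conditions: buildConditions(MAX_UPLOAD_SIZE, fileType),
//   Fields: { "Content-Type": fileType },
//   Expires: 3600,
// });
```

With this, an oversized upload fails at S3 with a policy error instead of succeeding, which is exactly the gap in the `getSignedUrl` version above.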
David
DavidOP8mo ago
Yeah, I ended up trying to implement createPresignedPost; I'm gonna finish it later today. I'm worried about the security though. Is POST just as safe as PUT, or are there some important headers I should include with the presigned POST? Actually, I followed the exact same blog post you linked 🤣 thank you for your time! @miho
miho
miho8mo ago
POST vs PUT are just HTTP method names; they don't have any security implications by themselves, so I'm eager to say: don't worry about it.
David
DavidOP8mo ago
Just got back from work and this is what seems to work (including the max file upload limit):
Actions.ts:
type FileReturnType = { fileEntity: FileEntity, fields: any }

export const createFile: CreateFile<FileArgs, FileReturnType> = async ({ fileType, name }, context) => {
  if (!context.user) {
    throw new HttpError(401);
  }

  const userInfo = context.user.id.toString();
  const { key, uploadUrl, fields } = await getUploadFileSignedURLFromS3({ fileType, userInfo });

  const fileEntity = await context.entities.File.create({
    data: {
      name,
      key,
      uploadUrl,
      type: fileType,
      user: { connect: { id: context.user.id } },
    },
  });

  return { fileEntity, fields };
};
s3Utils.ts:
type ContentLengthRangeCondition = ["content-length-range", number, number];
type Condition = ContentLengthRangeCondition | { "Content-Type": string };

export const getUploadFileSignedURLFromS3 = async ({ fileType, userInfo }: S3Upload) => {
  const ex = fileType.split('/')[1];
  const Key = `${userInfo}/${randomUUID()}.${ex}`;

  const Conditions: Condition[] = [
    ["content-length-range", 1, MAX_UPLOAD_SIZE],
    { "Content-Type": fileType }
  ];

  const Fields = {
    "Content-Type": fileType,
    "key": Key,
  };

  const bucket = process.env.AWS_S3_FILES_BUCKET;

  if (bucket === undefined) {
    throw new Error("Bucket name is not set");
  }

  const presignedPost = await createPresignedPost(s3Client, {
    Bucket: bucket,
    Key,
    Fields,
    Conditions,
    Expires: 3600,
  });

  return { key: Key, uploadUrl: presignedPost.url, fields: presignedPost.fields };
}
front-end:
const handleSubmit = async (event: ChangeEvent<HTMLFormElement>) => {
  event.preventDefault();

  setLoadingMessage("Uploading...");
  setIsLoading(true);

  try {
    const file = fileRef.current?.files?.[0];

    if (file === undefined) {
      window.alert('No file selected');
      setIsLoading(false);
      setLoadingMessage(null);
      return;
    }

    const { fileEntity, fields } = await createFile({ fileType: file.type, name: file.name });

    if (fileEntity === null || fileEntity.uploadUrl === null) {
      throw new Error('Failed to get upload URL');
    }

    const formData = new FormData();

    Object.keys(fields).forEach(key => {
      formData.append(key, fields[key]);
    });

    formData.append('file', file);

    const result = await axios.post(fileEntity.uploadUrl, formData);

    if (result.status !== 200 && result.status !== 204) {
      throw new Error('File upload failed');
    }

    // Success
  } catch (error) {
    console.error('Error uploading file', error);
    alert('Error uploading file. Please try again');
  } finally {
    setIsLoading(false);
    setLoadingMessage(null);
  }
};
Also had to update my AWS CORS to allow POST:
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET",
"POST"
],
"AllowedOrigins": [
"http://localhost:3000"
],
"ExposeHeaders": [],
"MaxAgeSeconds": 3600
}
]
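To avoid hardcoding the origin per environment, one option is to generate the CORS rule from a single variable. A sketch under assumptions: `buildCorsRule` is a hypothetical helper and the origin values are illustrative, not from the thread:

```typescript
// Hypothetical helper: builds the S3 CORS rule from a single origin value,
// so the same template serves dev (localhost) and prod (real domain).
function buildCorsRule(allowedOrigin: string) {
  return {
    AllowedHeaders: ["*"],
    AllowedMethods: ["GET", "POST"], // POST is required for presigned POST uploads
    AllowedOrigins: [allowedOrigin],
    ExposeHeaders: [],
    MaxAgeSeconds: 3600,
  };
}

// e.g. derived from an env var: localhost in dev, the real domain in prod
const devRule = buildCorsRule("http://localhost:3000");
const prodRule = buildCorsRule("https://myapp.example.com");
```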
Also, you need to manually install "@aws-sdk/s3-presigned-post", as it doesn't come with Open SaaS by default. Leaving this here in case someone finds it useful in the future. P.S. I've been writing C# for the past 10 years, so pardon my poor TypeScript 😄 Also, how do you go about handling the dev & production environments with CORS? Do I just set up 2 AWS buckets, one for dev that allows localhost:3000 and another one for production that allows the actual domain?
miho
miho8mo ago
@kapa.ai Also, how do you go about handling the dev & production environments with CORS? Do I just set up 2 AWS buckets, one for dev that allows localhost:3000 and another one for production that allows the actual domain?
kapa.ai
kapa.ai8mo ago
Let me check the knowledge base for you. One moment! :books: ---------------------- Yes, you're on the right track. For handling different environments with CORS, you can indeed set up two separate AWS buckets, one for development and one for production. In the development environment, you can allow localhost:3000 and in the production environment, you can allow your actual domain. You can manage this by setting the allowed origins in your CORS configuration based on the environment. This can be done by adding environment variables in your .env.server file and using these variables in your CORS configuration. Here's an example of how you can customize global middleware to add extra domains to CORS:
import cors from 'cors'
import { config } from 'wasp/server'

export const serverMiddlewareFn = (middlewareConfig) => {
  // Example of adding extra domains to CORS.
  middlewareConfig.set('cors', cors({ origin: [config.frontendUrl, 'https://example1.com', 'https://example2.com'] }))
  return middlewareConfig
}
In this example, config.frontendUrl could be your development or production domain, depending on the environment. You can set this value in your .env.server file. For more details, you can refer to the Wasp documentation on configuring middleware (https://wasp-lang.dev/docs/advanced/middleware-config#1-customize-global-middleware) and the Wasp documentation on accessing app config (https://wasp-lang.dev/docs/advanced/accessing-app-config#server-configuration-object). Remember to keep your development and production environments as similar as possible to avoid unexpected behavior when deploying your application. ---------------------- Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
miho
miho8mo ago
Hm, not really what I was going for 😄
David
DavidOP8mo ago
I like this kapa guy, he gets me 😄
miho
miho8mo ago
Can't you set two domains for one single bucket? 🙂 Wouldn't that work? But in a lot of places I've worked, there were separate buckets for dev and prod, just to ensure nothing got deleted by accident when developing stuff.
David
DavidOP8mo ago
Yeah, but couldn't users "hack" the bucket by sending requests from their own localhost:3000 or something? I guess I'm just paranoid about security, as I'm quite inexperienced with web dev.
miho
miho8mo ago
by sending requests from their own localhost:3000 or smth
That's why you have presigned URLs that enable them to do a specific action (which you validated they can do on the server)
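That server-side validation step might look something like this. A sketch only: `validateUploadRequest` and the `ALLOWED_TYPES` list are illustrative names, not from the thread — the real check lives in the `createFile` action before `getUploadFileSignedURLFromS3` is called:

```typescript
// Hypothetical pre-signing checks: run before issuing a presigned POST, so a
// client can only get a URL for an upload the server has approved.
const ALLOWED_TYPES = ["image/png", "image/jpeg", "application/pdf"];

function validateUploadRequest(fileType: string, userId: string | null): string {
  if (!userId) {
    throw new Error("Not authenticated"); // the real action throws HttpError(401)
  }
  if (!ALLOWED_TYPES.includes(fileType)) {
    throw new Error(`File type ${fileType} not allowed`);
  }
  return fileType.split("/")[1]; // the extension used to build the S3 key
}
```

Since the presigned POST also pins the key, content type, and size range, a request from someone else's localhost:3000 can only perform the exact upload the server signed, and only until it expires.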
David
DavidOP8mo ago
alright, fair enough.
miho
miho8mo ago
But it doesn't hurt to have multiple layers of security 😄 If something bad happens, like your private (dev) key leaking, you're protected to some degree.
David
DavidOP8mo ago
good point, gonna do this once I get to prod
MEE6
MEE68mo ago
Wohooo @David, you just became a Waspeteer level 2!
David
DavidOP8mo ago
Thanks for the help!