How I Built a File Watcher Program After Forgetting Where I Saved My AWS S3 Files

Written by rickdelpo | Published 2023/10/11

TL;DR

Part 1: In the first part of the series, the author discusses the challenges of migrating from an Apache server to AWS CloudFront. They emphasize the importance of configuring CloudFront settings correctly, addressing CORS issues in both S3 and CloudFront, and using AWS Route53 for DNS management. The article also touches on SEO strategy adjustments, handling file extensions, and leveraging CloudFront Functions for achieving pretty URLs, index behavior in subfolders, and enhancing security headers in responses.

Part 2: In this installment, the author provides corrections and insights into their AWS S3 and CloudFront setup. They emphasize the need to maintain old canonical URLs and clarify the misconceptions surrounding HTTP status codes 301 and 304. The article also discusses the advantages of using an AWS SSL certificate, managing DNS records, and highlights the importance of having separate S3 buckets for www versions. It addresses the nuances of SEO in the AWS environment and advises against unnecessary re-SEO efforts.

Part 3: This part delves into how to make AWS S3 behave more like a web server by using CloudFront Functions. The author focuses on three aspects: managing pretty URLs and handling redirects, enabling default index behavior for subfolders, and configuring response security headers. They explain the importance of these functions in enhancing the S3 and CloudFront setup to mimic an Apache server more effectively, making it suitable for hosting websites.


AWS CloudFront Migration Issues - Hosting Your Website in S3 - a 4-Part Story

1 AWS Cloudfront DNS Migration Gotchas that no one tells you about

2 AWS Gotchas Part 2 - corrections from part 1 plus new gotchas, A MUST Read

3 Enabling AWS S3 to behave more like a Web Server

4 Forgot where I saved my AWS S3 files so I wrote this File Watcher Program

We’ll do something a little unusual and start with Part 4

Not to worry, parts 1 through 3 can be found below.

Part 4 - Forgot where I saved my AWS S3 files so I wrote this File Watcher Program

The File Watcher in question is one that watches my local S3 repository for changes. Whenever a file changes, the matching object in my AWS S3 bucket is updated automatically by the code below. It is a simple Node.js script: save it as myfile.js, open the CMD prompt, and enter node myfile.js to run it.

I have not yet found a tutorial that gets all of this right. Hint: it's all about pre-setting the content type.

Most people may use GitHub as their website repository, but for AWS S3 any old local folder will do. An inconvenience of hosting on S3 is that you can't edit your HTML files right in the bucket. So after a few months with no edits, I suddenly asked myself: where did I save my files, and, most importantly, is this folder my most current version?

more below...

const AWS = require('aws-sdk'); // add these 2 dependencies
const fs = require('fs');

// the next line polls the origin folder while the program is open
const WATCH_TARGET = 'C:/Users/rickd/Downloads/backup 6-4-23/howtolearnjava.com/other/';

fs.watch(WATCH_TARGET, (eventType, filename) => {
  console.log('File "' + filename + '" was changed: ' + eventType);

  if (eventType === 'change') {
    // a file change occurred
    // get AWS user credentials
    const s3 = new AWS.S3({
      accessKeyId: 'xxxxxxxxxxxxxxxxxxxxxxxxx',
      secretAccessKey: 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
    });

    const bucketName = 'howtolearnjava.com'; // need to define this here

    // this is the origin file that we are changing
    // it needs the full path, otherwise a "no such file or dir" error shows up
    // no fancy concat anymore... don't put syntax in the path
    const origin_file = 'C:/Users/rickd/Downloads/backup 6-4-23/howtolearnjava.com/other/' + filename;

    // ***very important: this program will not overwrite the S3 file
    // if the content type is absent (in CloudFront, and perhaps always)
    // here is where we read our file
    const fileData = fs.readFileSync(origin_file);

    s3.upload({
      Bucket: bucketName,
      Key: filename, // this is only the name of the file
      // ContentType is very major; most tutorials leave this part out.
      // When hosting on CloudFront the default content type is octet-stream,
      // which causes the file to download. To overcome this we MUST specify
      // the content type here, in the top-level params, when the file is created.
      ContentType: 'text/html',
      Body: fileData
    }, (err, data) => {
      if (err) {
        console.error(err);
      } else {
        console.log(`File uploaded successfully. ${data.Location}`);
      }
    });
  } // end of the "changed" condition
});
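One limitation of the script as written is that ContentType is hard-coded to 'text/html', which is fine while only HTML pages live in the watched folder. If the folder also holds CSS, JavaScript, or images, one option is to derive the content type from the file extension. Here is a minimal sketch of that idea; the MIME_TYPES map and the contentTypeFor helper are my own illustration, not part of the original script:

const path = require('path');

// a small extension-to-MIME map; extend as needed
const MIME_TYPES = {
  '.html': 'text/html',
  '.css': 'text/css',
  '.js': 'application/javascript',
  '.json': 'application/json',
  '.png': 'image/png',
  '.jpg': 'image/jpeg',
  '.svg': 'image/svg+xml'
};

function contentTypeFor(filename) {
  return MIME_TYPES[path.extname(filename).toLowerCase()] || 'application/octet-stream';
}

// then, inside s3.upload, replace the hard-coded value with:
//   ContentType: contentTypeFor(filename)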

Caveats

  1. AWS Lambda cannot access the local Windows file system, making this task unfeasible.

  2. Be cautious that you MUST specify the content type when creating an HTML file. Failing to do so will result in the default octet-stream format, causing a file download and potentially overwriting existing content (a known CloudFront issue).

  3. It is imperative to use IAM user credentials (an access key ID and secret access key) for S3 access; the AWS IAM documentation explains how to create them.

  4. Upon starting the program, a console message warns that in 2023 you will need to use the AWS SDK for JavaScript v3 instead of v2, so as written this code may stop working in a few months (a v3 version of the upload is sketched after this list).

  5. It's advisable to refrain from keeping the S3 console open while running this program, as doing so has been known to cause crashes. Although no damage has occurred so far, it's a potential concern. It's best to verify your files by accessing them over HTTP instead.
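Regarding caveat 4: the upload portion of the watcher could be rewritten for the v3 SDK (@aws-sdk/client-s3) roughly as follows. This is only a sketch of the idea, not code from the original article; the region and credentials are placeholders.

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const fs = require('fs');

const s3 = new S3Client({
  region: 'us-east-1', // use your bucket's region
  credentials: {
    accessKeyId: 'xxxxxxxxxxxxxxxxxxxxxxxxx',
    secretAccessKey: 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
  }
});

async function uploadFile(bucketName, key, localPath) {
  const fileData = fs.readFileSync(localPath);
  // ContentType matters just as much in v3 as in v2
  await s3.send(new PutObjectCommand({
    Bucket: bucketName,
    Key: key,
    Body: fileData,
    ContentType: 'text/html'
  }));
  console.log('File uploaded successfully: ' + key);
}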

My Continuing Saga

Feeling concerned, I realized I could access my bucket and download the files manually. However, this process proved cumbersome, especially when attempting to select multiple files for download, as the download option disappeared.

So, I found a solution in the form of "fs.watch," a folder watcher. This handy tool automatically saves any changes to S3 whenever a file change is detected. By the way, it's important to note that without something like this you have to manually copy updated files and paste them into your S3 bucket every time you make a change (something I must have forgotten to mention earlier). Once I implemented this program, my experience with managing S3 became much smoother.

The program itself is quite simple, just a few lines of Node.js code. When executed, it sets up a listener that waits for updates. You only need to keep the program running while making changes to HTML files. When a file change is detected, the program sends the file name to an AWS upload function, which then uploads it to S3, overwriting the existing file. All you need to remember is the name of the Node.js file. Just open your command prompt, enter "node my_program.js," make your HTML updates, and the rest is like magic.

Part 1 - AWS Cloudfront DNS Migration Gotchas that no one tells you about

My overall observation: it takes a lot of digging to find these gotchas, and many times the AWS docs leave you hanging.

I migrated from an Apache server, where I used htaccess all the time, over to CloudFront. I started on June 7 and it is now June 18, and I still have issues, but it is working for the most part. This article assumes some basic DNS and networking knowledge but is still good for the beginner too.

I suspect that many developers go the Netlify route vs AWS Cloudfront due to an assumption of AWS complexity. I am not here to bash AWS, I think it is very worthwhile to do and HIGHLY recommend migrating to a better network neighborhood such as AWS S3 and Cloudfront.

First, the 4 major gotchas

  1. All names (comma separated), not just alternate names, must be entered in the Alternate Domain Name setting on the General tab in CloudFront, which is marked as 'optional'. The gotcha is that it says optional but won't work until you fill it in. Doesn't 'optional' mean optional?

  2. The CORS issue lives in 2 places, S3 and CloudFront, so don't forget to enable CORS in CloudFront too. The gotcha is enabling it only in S3 while forgetting to do it in CloudFront. Otherwise, pre-migration links used on the outside will show a CORS error in the Dev Tools view. Go to the Behaviors tab in CloudFront and, under Origin request policy, choose CORS-CustomOrigin, and under Response headers policy, add CORS with preflight and security (the S3 side of the configuration is sketched after this list).

Note that both these options are marked as optional (the gotcha here) but should not be skipped or CORS will only be enabled in S3 and not Cloudfront. Don't forget to press Save Changes.

3. Using public DNS other than Route53 does not migrate cleanly, due to URL forwarding issues. Hosting your DNS on AWS Route53 costs only 50 cents per month versus GoDaddy or other DNS providers, and IT IS WELL WORTH IT. I was thrilled when I first found out that you don't need Route53, but I ended up using it for a clean migration. The main issue was that my old links stopped working; they did not forward to my new page until I went back to Route53. The thing about Route53 vs GoDaddy is that Route53 allows an A record to point to an alias, while GoDaddy only allows an actual IP address. I think this is how to 301 the right way and get masking without really doing illegal masking.

4. This is not exactly a gotcha, but it could be; it is more of an FYI. Upon migration, pages instantly live in the S3 bucket behind a CloudFront distribution, so be sure to change your canonicals right away on each page to the CloudFront path and submit a new sitemap to Google Search Console if you are on it. Also, any links on your homepage need the CloudFront path changed right away; this applies to both buttons and href links. Even though many people say a sitemap is not necessary for a small site, I say submit a new sitemap for the migration, because if you are registered with Google Search Console this becomes a must-do.

Also, be sure robots.txt is used and points to your sitemap so web crawlers can find you.
Another FYI: CloudFront seamlessly does a proper migration, which includes keeping your old name in the address bar. But if you do not use Route53, then your new CloudFront address will appear instead of your web name. In my case, my old links on Facebook and Twitter stopped working because of this; they timed out. When I changed back to Route53, all my old link issues were fixed. I simply could not have a situation where all my old links to my new home page fail.

Note: since writing this the first time, I found that this #4 gotcha about the canonicals and file paths is not true; I was given bad advice. Please see my gotchas in Part 2.
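As promised in gotcha #2, here is roughly what the S3 side of the CORS setup looks like: a JSON policy pasted into the bucket's Permissions > Cross-origin resource sharing (CORS) box. This permissive example is only for illustration; in practice you would tighten AllowedOrigins to your own domains.

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]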

More gotchas below - but many are just FYIs

  1. You need to re-SEO everything because files now live in a new place; GSC still favors my now-defunct files and sitemap. When I request indexing, a duplicate issue arises where Google chooses the old canonical over the user-declared canonical. Still pending; I will have an update on this soon. My new finding is that we do not have to re-SEO, please see my updates in the gotchas Part 2.

  2. Beware that S3 requires the .html extension until you tell it otherwise (still pending).
    My old pages had no file extension, as directed by the htaccess file on the Apache server. Now if a user clicks an old page link from Twitter or FB, they get my 404 page, which I finally revamped to accommodate this. Eventually, I need to try out the extensionless S3 option, but for now, it is not a priority.


    PS: My index page's old link works, thank God. Update: just rename the page in S3 and remove the .html extension, and all old links will work now.

Since writing this article the first time around, I found out that S3 files without extensions need special attention in CloudFront. Removing the extension as I described above is not working out the way I expected. See Part 3 of this series to find out why.

  1. 304 forwarding, not 301, index page only; this impacts SEO. But recently, requesting indexing on the home page returned a 200 response and not the expected 304. This is an SEO issue because if Google sees the 304 response, they assume there are no updates to the page and thus will not index it.
Update - I found out this is not true, see my gotchas in Part 2 about this.

  1. It is best to use an AWS SSL certificate and not your own cert. I think an AWS SSL cert does a proper TLS handshake with CloudFront, though I must say I do not have direct experience with this and am just using common sense. I always used a free cert from Let's Encrypt, so I have no experience with special certs, but I suspect moving one over can be a can of worms. So just get a new cert and don't try to move your old one over. Not tested, just common sense. PS: You will need a new SSL certificate anyway because your pages now reside at a new address, the CloudFront endpoint.

  2. Remember to have a separate S3 bucket for the www version. All the web crawlers take into consideration:
    a. https://www
    b. http://www
    c. https:// naked domain
    d. http:// naked domain
    But I presume everyone already knows this part. Do this only if you care about SEO. It is probably best practice anyway; otherwise, site health gets dinged by Semrush, Ahrefs, and GSC. All 4 versions need to resolve to a single canonical version (best practice).

  3. Remember to have both the A record and AAAA record pointing to the CloudFront distribution in Route53, and not to the IP address of the new server. We use an Alias for this: first enter your name, then for the value choose Alias, then choose CloudFront distribution, and hopefully your endpoint pops up; if it does not, manually enter the new endpoint. Mine was dkrs578h9fllv.cloudfront.net, which was assigned to me.

  4. PS, don't forget to enable Brotli compression in CloudFront. Go to the Behaviors tab in CloudFront and be sure that "Compress objects automatically" is set to Yes, then go down and choose Cache Policy and Origin Request. Do not choose Legacy Cache Settings unless you need to fix your TTL. Under Cache Policy, choose CachingOptimized (recommended for S3), and this will enable Brotli too. Your site will be blazing fast when deployed. Just FYI, you can go to Dev Tools and click the Network tab to see whether the response headers say Brotli is enabled. In addition to clicking the Network tab, you also need to double-click the actual HTML page after you refresh your site with Dev Tools open; this is how to see the response header. You can also find the age of the cached page, which I think is very cool. 86400 seconds is the default, so when this 24-hour period passes your page will recache and the age will be approximately 50 seconds and counting. For each refresh, you can see the new age. Note that the 304 response code happens here because CloudFront is returning a cached page and no changes occur for 24 hours.

  5. CloudFront serves outdated content for 24 hours using a cached version of the page. This is a pain in the beginning because, for a few days, we are constantly updating our files, but the solution is not that bad. First of all, I could not get invalidations to work properly (a sketch of scripting one appears after this list), so I had to disable my cache for a couple of days during the dev process. Also, note that S3 files cannot be edited; you first need to delete them and then upload a new version of the page, or just overwrite the existing file with a new one. Then I reset my cache to 24 hours when I was done. Note that I originally reset the default cache TTL to 3600 seconds but still needed to wait one hour to see my changes.

  6. A note on URL masking: don't do this yourself from GoDaddy (where you can do URL forwarding with masking). Don't do it!! CloudFront will do it the correct way when you are on Route53. My lesson learned here was to just use Route53 for my DNS and not my DNS provider, which is Name.com.

  7. Using WAF on a CloudFront distribution incurs charges. The distribution will work fine without WAF, even though a message comes up saying that WAF is recommended.

  8. Remember that you need 2 CloudFront distributions: one for the main index page and the other for your www page, originally set up as a separate S3 bucket. Beware of enabling WAF; we can easily live without it. Check your AWS Cost Explorer to find out the current charges. I am not sure whether these charges apply if you are still in the 12-month free tier.

  9. It can take up to 48 hours for CloudFront to fully propagate so be patient. Also, the new SSL cert needs time to be cached too.
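As mentioned in item 5, invalidations can also be scripted instead of clicked through the console. Here is a sketch using the same v2 SDK as the file watcher above; the distribution ID is a placeholder, and note that issuing very large numbers of invalidation paths can incur charges.

const AWS = require('aws-sdk');
const cloudfront = new AWS.CloudFront();

// invalidate every path so the next request pulls fresh copies from S3
cloudfront.createInvalidation({
  DistributionId: 'E1234EXAMPLE', // placeholder: use your distribution's ID
  InvalidationBatch: {
    CallerReference: 'invalidate-' + Date.now(), // any unique string
    Paths: { Quantity: 1, Items: ['/*'] }
  }
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Invalidation started: ' + data.Invalidation.Id);
});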

Conclusion - not sure if Netlify has any of these gotchas. Would be interesting to get some feedback on this. ALSO, FYI, I am not an expert at any of this just a casual ordinary Joe making my observations. But certainly would welcome feedback in the comments.

Part 2 - AWS Gotchas: Corrections from Part 1 Plus New Gotchas, A MUST Read

Last week, I decided to host my website using S3 and CloudFront, and because I am not an expert I ran into lots of difficulties. While Googling, I relied on some posts that turned out to be bad advice. The advice below only applies if you do not change your website name and are just migrating over from a different web host. Also, much of my advice below is SEO-related. While many folks abhor SEO, my opinion is that it is a necessary evil and a best practice if you want to get noticed out there (being on SERP page 6 is better than not being noticed at all). But if your web page is for internal corporate use only, then there is no need to care about SEO.

Firstly let me reiterate that gotcha #1 from my first post is the major issue I faced. It took days for me to figure this one out. Your Cloudfront Distribution will not even resolve if you do not follow gotcha #1.

So, while my migration was not working for days I created some more problems for myself that I had to correct after deployment.

Correction #1 - scratch what I tell you in gotcha #4 because it is all wrong. Yes, your files are now in a new path but it is ONLY a file path from Cloudfront to S3. There is ABSOLUTELY NO NEED to redo all your links to this file path or your canonicals. KEEP THEM AS IS using the legal path that you set up in Route53 and the S3 Bucket name. This is the legal path. Also, it is imperative for your sitemap to not have any references to the S3 file path, and the same for robots.txt. Bottom Line, make NO references to the Cloudfront S3 path anywhere.

Correction #2 - gotcha #1 is also incorrect, you do not need to re-seo everything. Your legal site name is still the same so don't create a new property in Google Search Console with the new S3 path name and start to request indexing as this will only create duplicate and confusing content out there on the web.

Correction #3 - be consistent about your page names. If they did not have an .html extension last time, be sure that is still true this time; otherwise, users will get a 404 error. My confusion was that by default all pages uploaded to S3 keep their .html extension. If you were on an old Apache server where your htaccess file stripped the .html extension, then this advice is for you. It is easy to just check the box in the S3 bucket next to your file, click Actions, then Rename, remove the .html extension, and don't forget to re-save (a scripted alternative is sketched just below).
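S3 has no true rename operation, so if you would rather script the step above than click through the console, the usual pattern is copy-then-delete. Here is a minimal sketch with the v2 SDK; the bucket name and the JDBC.html key are just illustrations:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const bucket = 'howtolearnjava.com';

// copy the object to a new key without the .html extension, then delete the original
s3.copyObject({
  Bucket: bucket,
  CopySource: bucket + '/JDBC.html', // CopySource must include the source bucket name
  Key: 'JDBC'
}, (err) => {
  if (err) return console.error(err);
  s3.deleteObject({ Bucket: bucket, Key: 'JDBC.html' }, (err2) => {
    if (err2) console.error(err2);
    else console.log('Renamed JDBC.html to JDBC');
  });
});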

Correction #4 - this refers to the parts of gotcha #4 stating that the CDN returns a 304 code vs a 301. While this is true, and in some cases Google Search Console will not re-index, here is what is really happening. When Google reindexes a page, you can find that the status code of that page is 200 for the indexing pass. This is because your robots.txt and sitemap are actually instructing the Google Bot to go to the source and not to the cached page over at the CDN.

When this happens, your page will reindex. Here is the rub: much advice out there tells you that for small sites you do not need robots.txt or a sitemap, but I say ignore that and do both anyway if you want web crawlers out there to notice you (a minimal robots.txt is shown below). Don't forget to do the same over at Bing Webmaster Tools. My suspicion is that if you don't have robots.txt or a sitemap, then Google will go out to the 304 version of the page at the CDN and thus will not index the page, because the Google Bot has no sitemap or robots.txt to tell it what to do and has to guess. I think I am correct with this presumption but have not fully tested it, and professional feedback is a bit hard to come by these days.
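For reference, a robots.txt that lets crawlers in and points them at your sitemap can be as small as this (the sitemap URL is an assumption; use wherever yours actually lives):

User-agent: *
Allow: /
Sitemap: https://howtolearnjava.com/sitemap.xml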

That is all for my corrections!

During my travels, I found some new gotchas that no one tells you about.

  1. DO NOT create a sitemap using the CloudFront s3 file path name.

  2. DO NOT create a new property at Google Search Console.

  3. Even though all your pre-migration files are now deleted, it seems like there is no place for a sitemap; each Google property has a designated location to submit a sitemap.

  4. DO NOT change any of your old canonicals, as AWS does its magic for you. If you try to change any canonicals (I use Screaming Frog to test crawl), Google will pick the legal canonical with the path name that matches your legal name, registered at Route53, which is also your bucket name. It will not choose the user-selected canonical if it is wrong, and thus will not index your page.

  5. This one is kind of insidious. The default time for the CloudFront CDN to update the cache is 24 hours. Be advised that sitemaps and JavaScript files work on a different schedule and update at different times, so don't be too hasty when checking all your edits; be patient. CloudFront does warn us on Route53 that the TTL is 48 hours. In my case, some of my JavaScript was not working, but 12 hours later everything was fine.

That’s all for part 2 folks and THANKS for reading this far!

Part 3 - Enabling AWS S3 to Behave more like a Web Server

Cloudfront Functions (as of Nov 2021) can be used to mimic Apache htaccess behavior with Cloudfront in front of S3 providing more server-like behavior for our AWS configuration. I have 4 things in mind.

1. We want to deal with Pretty URLs or files without extensions using redirects

2. We want to enable default index.html behavior for our subfolders

3. We want to fiddle around with our response security headers and

4. We want to do some URL rewriting.

In this article, we will cover 1 through 3.

Remember back in the day, before HTTPS (circa 2018), when hosting in S3 was as simple as enabling web hosting right in the bucket? Well, AWS S3 is not really meant to be a web server, so adding CloudFront in front of S3 gets us closer, but more is still needed. And with that 'more', I get really close to mimicking Apache HTTP-like behavior. Beware that by adding CloudFront we do experience some challenges, as some traditional S3 behavior ceases to work; CloudFront overrides it and gets in the way.

Part 1 - Pretty URLs and the Need for Redirects

One persistent issue I encountered was how to handle HTML pages without extensions. In my previous Apache setup, I had some files with HTML extensions and others without. This wasn't a problem; I just added a couple of lines to my .htaccess file, and it resolved the extensionless file issue. However, when dealing with S3, it's a different story. S3 insists on keeping the .html extension for all files.

My initial approach was to upload HTML files and then rename them to remove the extension. I also tried using the metadata section in the S3 record to redirect my extensionless files. At one point, I even attempted to use the AWS CLI to upload my files. But after wrestling with this issue, I realized I didn't even need these extensionless files.

Here's the twist: it turns out that Cloudfront doesn't adhere to the traditional S3 manipulation rules. In essence, Cloudfront throws a wrench into things. This is a major "gotcha" because this crucial piece of information isn't readily shared. It means that no matter how well you study AWS, it may not work as expected because Cloudfront introduces exceptions. It's a significant source of frustration and can lead to endless Google research. Now that I've learned about this "not honoring" issue, here's how I've adapted my strategy to tackle it.

After overthinking, I found that the solution is simply to include a 301 redirect instruction, in a Cloudfront function, to redirect to a newly created HTML file. Using this approach also protects our SEO link equity, also known as link juice. We are simply asking if the URL request is the old file without an extension. No need to create this file, just ask in our request header. If the condition is true then we redirect the user to the HTML version of the file. We should thus create the HTML version of the file and also not forget to change our sitemap and canonicals to agree. Any previous links to the no-extension syntax will continue to work. This is key.

PS: be sure that all pages agree with the sitemap, because some have the extension and some don't. Also, don't forget to change your canonicals to include the .html after your migration.

This is what it looks like:

function handler(event) {
    var request = event.request;
    var uri = request.uri;

    var newurl5 = 'https://howtolearnjava.com/JDBC.html';
    var newurl6 = 'https://howtolearnjava.com/Java-Servlet.html';

    // the uri is everything after the .com slash in our path, including the slash and any subfolders

    if (uri === '/JDBC') {
        return {
            statusCode: 301,
            statusDescription: 'Moved Permanently',
            headers: { 'location': { 'value': newurl5 } }
        };
    }

    if (uri === '/Java-Servlet') {
        return {
            statusCode: 301,
            statusDescription: 'Moved Permanently',
            headers: { 'location': { 'value': newurl6 } }
        };
    }

    // Part 2 - the code below enables subfolders to serve index.html by default.
    // Check whether the URI is missing a file name. If so, it is a subfolder, and we
    // concat an index.html file so that when the user types the subfolder, they get the index by default.

    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    }
    // Check whether the URI is missing a file extension.
    else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }
    return request;
}

Open AWS CloudFront, click Functions in the left menu, name and build the function, test it, publish it, then associate it with a CloudFront distribution.

Part 2 - Subfolders

Since S3 file storage is just simple storage, S3 has no knowledge of a subfolder or its index file. Luckily, when we first web-enable an S3 bucket, that step gives our index file default-like behavior, but generally nothing is ever said about subfolders. In Apache, we make use of an htaccess file in each subfolder to deal with default indexes. S3 and CloudFront do not have an htaccess file, so we use a CloudFront Function to mimic it.

In our example above, the CloudFront Function acts like a listener for HTTP requests (similar to Apache HTTP). Notice that when the user request comes in at the CDN edge, our program asks whether it is a subfolder, and if it is, it concats an index file onto the URI. Voila, we have the behavior we want.
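One nice property of CloudFront Functions is that the handler is plain JavaScript, so you can sanity-check the logic locally before publishing. Assuming the handler function above is pasted into a Node file, a few calls like these (the /tutorials path is just an illustration) show the rewrites and the redirect:

// subfolder request gets index.html appended
console.log(handler({ request: { uri: '/tutorials/' } }).uri);  // -> /tutorials/index.html

// extensionless request is treated as a subfolder
console.log(handler({ request: { uri: '/tutorials' } }).uri);   // -> /tutorials/index.html

// an old extensionless page is 301-redirected to its .html version
console.log(handler({ request: { uri: '/JDBC' } }).statusCode); // -> 301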

Part 3 - Response Security Headers

In the 2 examples above, we are dealing with the request, but now we need to write a separate function to handle the response. What we are doing here is giving our CloudFront config some additional headers for security purposes. Normally this is done, once again, in our Apache htaccess file, but here we do it as shown below. A separate function is needed because when we test it we want to check the box that says Response, and we do the same when we associate our published function. Likewise, with the request function, we check the box that says Request.

Here we go:

function handler(event) {
     var response = event.response;
     var headers = response.headers;


   // config security headers

headers['strict-transport-security'] = { value: 'max-age=63072000; includeSubdomains; preload'}; 


headers['content-security-policy'] = { value: "script-src-elem 'self' 'unsafe-eval' https://javasqlweb.org/js_new.js 'unsafe-inline' https://ipinfo.io/ 'unsafe-inline' https://www.googletagmanager.com/ 'unsafe-inline'; style-src 'self' 'unsafe-inline'"};


headers['x-content-type-options'] = { value: 'nosniff'}; 


headers['x-xss-protection'] = {value: '1; mode=block'};


headers['referrer-policy'] = {value: 'no-referrer-when-downgrade'};


headers['x-frame-options'] = {value: 'DENY'};


return response;
}

The content security policy header is the challenging one. In my case, I have a JavaScript file called js_new.js to allow, and I also have 2 other JS connections, one to ipinfo and the other to Google Tag Manager. My last directive, for styles, enables inline styles. It is best to Google all of these headers to gain a full understanding of their importance.

For thorough pre- and post-testing of all the adjustments I mentioned, I rely on Screaming Frog. It will flag security headers that are missing or incorrectly implemented. Screaming Frog is undeniably a powerful tool to use regularly for website testing, helping you identify and rectify various issues.

It's also worthwhile to take some time to study these headers and understand what they signify. Another useful approach is to delve into the network tab of Chrome Dev Tools (accessed with ctrl-shift-j). From there, you can click on your page and inspect the response headers to gain insights into what's being returned.

Final Thoughts

With these minor refinements, we gradually coax S3 into behaving more like a genuine web server, even though it fundamentally serves as simple storage. Similar to how we enhance Apache using .htaccess, we enhance S3 with a Cloudfront distribution and a Cloudfront function to implement these tweaks. This has certainly improved my confidence in using S3 to host my website.



Written by rickdelpo | Retired Data Engineer helping Wannabe Devs. Recently ditched Java in favor of Javascript and Serverless Computing.
Published by HackerNoon on 2023/10/11