
Change the directory of Bootstrap from '_assets' to static 'assets'.

Author: vonavi
Date: 2014-07-01 14:12:54 +03:00
Parent: e9b7df422d
Commit: d92ccf2af6
226 changed files with 0 additions and 0 deletions


@@ -0,0 +1,100 @@
## What does `s3_cache.py` do?

### In general

`s3_cache.py` maintains a cache, stored in an Amazon S3 (Simple Storage Service) bucket, of a given directory whose contents are considered non-critical and are completely & solely determined by (and should be able to be regenerated from) a single given file.

The SHA-256 hash of that single file is used as the key for the cache. The directory is stored as a gzipped tarball.

All the tarballs are stored in S3's Reduced Redundancy Storage (RRS) class, since this is cheaper and the data is non-critical.

`s3_cache.py` itself never deletes cache entries; deletion should either be done manually or via automatic S3 lifecycle rules on the bucket.

Similar to git, `s3_cache.py` assumes that [SHA-256 will effectively never have a collision](http://stackoverflow.com/questions/4014090/is-it-safe-to-ignore-the-possibility-of-sha-collisions-in-practice).
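
A minimal sketch of this keying scheme (the real implementation is the `s3_cache.py` shown further down in this diff):

```python
# Conceptual sketch only -- see s3_cache.py below for the actual code.
from hashlib import sha256

def cache_key(dependencies_file):
    """The S3 object name for the cache is the SHA-256 hex digest of the dependency file."""
    with open(dependencies_file, 'rb') as f:
        return sha256(f.read()).hexdigest()
```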
### For Bootstrap specifically

`s3_cache.py` is used to cache the npm packages that our Grunt tasks depend on and the RubyGems that Jekyll depends on. (Jekyll is needed to compile our docs to HTML so that we can run them through an HTML5 validator.)

For npm, the `node_modules` directory is cached based on our `npm-shrinkwrap.canonical.json` file.

For RubyGems, the `gemdir` of the current RVM-selected Ruby is cached based on the `pseudo_Gemfile.lock` file generated by our Travis build script. `pseudo_Gemfile.lock` contains the versions of Ruby and Jekyll that we're using (read our `.travis.yml` for details).
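
The friendly names and paths in the following invocations are illustrative guesses at how those two caches map onto `s3_cache.py`'s arguments; the authoritative commands are in `.travis.yml`:

```bash
# Hypothetical invocations -- the real ones live in .travis.yml.
./test-infra/s3_cache.py download npm-packages npm-shrinkwrap.canonical.json ./node_modules
./test-infra/s3_cache.py download rubygems pseudo_Gemfile.lock "$(rvm gemdir)"
```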
## Why is `s3_cache.py` necessary?

`s3_cache.py` is used to speed up Bootstrap's Travis builds. Installing npm packages and RubyGems used to take up a significant fraction of our total build times. Also, at the time that `s3_cache.py` was written, npm was occasionally unreliable.

Travis does offer built-in caching on their paid plans, but this do-it-ourselves S3 solution is significantly cheaper, since we only need caching and not Travis' other paid features.
## Setup

### Overview

1. Create an Amazon Web Services (AWS) account.
2. Create an Identity & Access Management (IAM) user, and note their credentials.
3. Create an S3 bucket.
4. Set permissions on the bucket to grant the user read+write access.
5. Set the user credentials as secure Travis environment variables.
### In detail

1. Create an AWS account.
2. Log in to the [AWS Management Console](https://console.aws.amazon.com).
3. Go to the IAM Management Console.
4. Create a new user (named e.g. `travis-ci`) and generate an access key for them. Note both the Access Key ID and the Secret Access Key.
5. Note the user's ARN (Amazon Resource Name), which can be found in the "Summary" tab of the user browser. This will be of the form: `arn:aws:iam::XXXXXXXXXXXXXX:user/the-username-goes-here`
6. Note the user's access key, which can be found in the "Security Credentials" tab of the user browser.
7. Go to the S3 Management Console.
8. Create a new bucket. For a non-publicly-accessible bucket (like Bootstrap uses), it's recommended that the bucket name be random to increase security. On most *nix machines, you can easily generate a random UUID to use as the bucket name using Python:
```bash
python -c "import uuid; print(uuid.uuid4())"
```
9. Determine and note what your bucket's ARN is. The ARN for an S3 bucket is of the form: `arn:aws:s3:::the-bucket-name-goes-here`
10. In the bucket's Properties pane, in the "Permissions" section, click the "Edit bucket policy" button.
11. Input and submit an IAM Policy that grants the user at least read+write rights to the bucket. AWS has a policy generator and some examples to help with crafting the policy. Here's the policy that Bootstrap uses, with the sensitive bits censored:
```json
{
    "Version": "2012-10-17",
    "Id": "PolicyTravisReadWriteNoAdmin",
    "Statement": [
        {
            "Sid": "StmtXXXXXXXXXXXXXX",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::XXXXXXXXXXXXXX:user/travis-ci"
            },
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:GetObjectVersion",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
                "arn:aws:s3:::XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/*"
            ]
        }
    ]
}
```
12. If you want deletion from the cache to be done automatically based on age (like Bootstrap does): In the bucket's Properties pane, in the "Lifecycle" section, add a rule to expire/delete files based on creation date. (A rough programmatic equivalent is sketched after this list.)
13. Install the [`travis` RubyGem](https://github.com/travis-ci/travis): `gem install travis`
14. Encrypt the environment variables:
```bash
travis encrypt --repo twbs/bootstrap "AWS_ACCESS_KEY_ID=XXX"
travis encrypt --repo twbs/bootstrap "AWS_SECRET_ACCESS_KEY=XXX"
travis encrypt --repo twbs/bootstrap "TWBS_S3_BUCKET=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
```
15. Add the resulting secure environment variables to `.travis.yml` (the expected shape is sketched after this list).
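
For step 12, the console UI described above is the documented path; as a rough alternative sketch, the same `boto` library that `s3_cache.py` depends on can set an equivalent expiration rule programmatically. The bucket name and the 30-day expiry below are illustrative assumptions, not Bootstrap's actual values:

```python
# Sketch only: add a lifecycle rule that expires objects N days after creation (boto 2.x).
from boto.s3.connection import S3Connection
from boto.s3.lifecycle import Lifecycle, Expiration

conn = S3Connection()  # reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment
bucket = conn.get_bucket('XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX')  # placeholder bucket name

lifecycle = Lifecycle()
# Arguments: rule id, key prefix ('' = whole bucket), status, expiration (days after creation).
lifecycle.add_rule('expire-cache-entries', '', 'Enabled', Expiration(days=30))
bucket.configure_lifecycle(lifecycle)
```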
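
For step 15, `travis encrypt` prints `secure:` entries; in `.travis.yml` they typically sit under the global env section, roughly like this (the ciphertext strings below are placeholders):

```yaml
# Shape only -- the real values are the base64 ciphertexts emitted by `travis encrypt`.
env:
  global:
    - secure: "<AWS_ACCESS_KEY_ID ciphertext>"
    - secure: "<AWS_SECRET_ACCESS_KEY ciphertext>"
    - secure: "<TWBS_S3_BUCKET ciphertext>"
```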
## Usage

Read `s3_cache.py`'s source code and Bootstrap's `.travis.yml` for how to invoke and make use of `s3_cache.py`.
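
As a sketch, one plausible download/build/upload cycle for the npm cache looks roughly like this (paths, names, and failure handling are illustrative; the authoritative commands are in `.travis.yml`):

```bash
# 1. Try to restore the cache; on a miss, s3_cache.py drops a .need-to-upload marker and exits nonzero.
./test-infra/s3_cache.py download npm-packages npm-shrinkwrap.canonical.json ./node_modules || true

# 2. Install from scratch if the cache was not restored.
[ -d node_modules ] || npm install

# 3. Refresh the cache; s3_cache.py only uploads if the marker from step 1 is present.
./test-infra/s3_cache.py upload npm-packages npm-shrinkwrap.canonical.json ./node_modules
```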

File diff suppressed because one or more lines are too long


@@ -0,0 +1 @@
boto==2.20.0

assets/vendor/bootstrap/test-infra/s3_cache.py (vendored executable file, 107 lines)

@@ -0,0 +1,107 @@
#!/usr/bin/env python2.7
from __future__ import absolute_import, unicode_literals, print_function, division

from sys import argv
from os import environ, stat, remove as _delete_file
from os.path import isfile, dirname, basename, abspath
from hashlib import sha256
from subprocess import check_call as run

from boto.s3.connection import S3Connection
from boto.s3.key import Key
from boto.exception import S3ResponseError


NEED_TO_UPLOAD_MARKER = '.need-to-upload'
BYTES_PER_MB = 1024 * 1024
try:
    BUCKET_NAME = environ['TWBS_S3_BUCKET']
except KeyError:
    raise SystemExit("TWBS_S3_BUCKET environment variable not set!")


def _sha256_of_file(filename):
    hasher = sha256()
    with open(filename, 'rb') as input_file:
        hasher.update(input_file.read())
    file_hash = hasher.hexdigest()
    print('sha256({}) = {}'.format(filename, file_hash))
    return file_hash


def _delete_file_quietly(filename):
    try:
        _delete_file(filename)
    except (OSError, IOError):
        pass


def _tarball_size(directory):
    mib = stat(_tarball_filename_for(directory)).st_size // BYTES_PER_MB
    return "{} MiB".format(mib)


def _tarball_filename_for(directory):
    return abspath('./{}.tar.gz'.format(basename(directory)))


def _create_tarball(directory):
    print("Creating tarball of {}...".format(directory))
    run(['tar', '-czf', _tarball_filename_for(directory), '-C', dirname(directory), basename(directory)])


def _extract_tarball(directory):
    print("Extracting tarball of {}...".format(directory))
    run(['tar', '-xzf', _tarball_filename_for(directory), '-C', dirname(directory)])


def download(directory):
    # Note: friendly_name and key are module-level names set in __main__ below.
    _delete_file_quietly(NEED_TO_UPLOAD_MARKER)
    try:
        print("Downloading {} tarball from S3...".format(friendly_name))
        key.get_contents_to_filename(_tarball_filename_for(directory))
    except S3ResponseError as err:
        open(NEED_TO_UPLOAD_MARKER, 'a').close()
        print(err)
        raise SystemExit("Cached {} download failed!".format(friendly_name))
    print("Downloaded {}.".format(_tarball_size(directory)))
    _extract_tarball(directory)
    print("{} successfully installed from cache.".format(friendly_name))


def upload(directory):
    _create_tarball(directory)
    print("Uploading {} tarball to S3... ({})".format(friendly_name, _tarball_size(directory)))
    key.set_contents_from_filename(_tarball_filename_for(directory))
    print("{} cache successfully updated.".format(friendly_name))
    _delete_file_quietly(NEED_TO_UPLOAD_MARKER)


if __name__ == '__main__':
    # Uses environment variables:
    #   AWS_ACCESS_KEY_ID -- AWS Access Key ID
    #   AWS_SECRET_ACCESS_KEY -- AWS Secret Access Key

    argv.pop(0)
    if len(argv) != 4:
        raise SystemExit("USAGE: s3_cache.py <download | upload> <friendly name> <dependencies file> <directory>")
    mode, friendly_name, dependencies_file, directory = argv

    conn = S3Connection()
    bucket = conn.lookup(BUCKET_NAME, validate=False)
    if bucket is None:
        raise SystemExit("Could not access bucket!")

    dependencies_file_hash = _sha256_of_file(dependencies_file)

    key = Key(bucket, dependencies_file_hash)
    key.storage_class = 'REDUCED_REDUNDANCY'

    if mode == 'download':
        download(directory)
    elif mode == 'upload':
        if isfile(NEED_TO_UPLOAD_MARKER):  # FIXME
            upload(directory)
        else:
            print("No need to upload anything.")
    else:
        raise SystemExit("Unrecognized mode {!r}".format(mode))


@@ -0,0 +1,83 @@
[
  # Docs: https://saucelabs.com/docs/platforms/webdriver
  {
    browserName: "safari",
    platform: "OS X 10.9"
  },
  # {
  #   browserName: "googlechrome",
  #   platform: "OS X 10.9",
  #   version: "31"
  # },
  {
    browserName: "firefox",
    platform: "OS X 10.9"
  },
  # Mac Opera not currently supported by Sauce Labs
  {
    browserName: "internet explorer",
    version: "11",
    platform: "Windows 8.1"
  },
  {
    browserName: "internet explorer",
    version: "10",
    platform: "Windows 8"
  },
  # {
  #   browserName: "internet explorer",
  #   version: "9",
  #   platform: "Windows 7"
  # },
  # {
  #   browserName: "internet explorer",
  #   version: "8",
  #   platform: "Windows 7"
  # },
  # { # Unofficial
  #   browserName: "internet explorer",
  #   version: "7",
  #   platform: "Windows XP"
  # },
  {
    browserName: "googlechrome",
    platform: "Windows 8.1"
  },
  {
    browserName: "firefox",
    platform: "Windows 8.1"
  },
  # Win Opera 15+ not currently supported by Sauce Labs
  {
    browserName: "iphone",
    platform: "OS X 10.9",
    version: "7"
  },
  # iOS Chrome not currently supported by Sauce Labs
  # Linux (unofficial)
  {
    browserName: "googlechrome",
    platform: "Linux"
  },
  {
    browserName: "firefox",
    platform: "Linux"
  }
  # Android Chrome not currently supported by Sauce Labs
  # { # Android Browser (super-unofficial)
  #   browserName: "android",
  #   version: "4.0",
  #   platform: "Linux"
  # }
]


@@ -0,0 +1,4 @@
#!/bin/bash
# Install npm packages against the canonical (pinned) shrinkwrap file,
# then remove the temporary npm-shrinkwrap.json copy from the working tree.
cp test-infra/npm-shrinkwrap.canonical.json npm-shrinkwrap.json
npm install
rm npm-shrinkwrap.json