• Initial commit

    • From repository rileyjshaw/plop

    plop (working title)

    Scriptable text expansion microtool.

    Status: pre-alpha / prototype / I thought of this an hour ago

    Usage:

    # Only works on macOS for now.
    git clone https://github.com/rileyjshaw/plop
    cd plop
    npm i
    npm start
    

    While going about your business, hit Cmd + Shift + Space. Write some valid JavaScript. The result will be typed into whatever program you have open.

    Note:

    \n triggers a simulated Enter press. Please be kind; don't spam your group chat.
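
    To give a feel for it, here are a couple of hypothetical expansions you might type into the plop prompt (these are illustrative snippets, not bundled examples):

    ```javascript
    // Any valid JavaScript expression works; its result gets typed into
    // the frontmost app.

    // Expand today's date (output varies by day):
    new Date().toISOString().slice(0, 10);

    // Expand a quick numbered list from an array:
    ['Intro', 'Setup', 'Usage'].map((t, i) => `${i + 1}. ${t}`).join('\n');
    // The '\n' characters in the result simulate Enter presses as they're typed.
    ```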

  • Add pagination to the blog!

    • From repository rileyjshaw/rileyjshaw-new

    This is an exciting commit for the new site. It collects blog posts from multiple sources (for now: http://rileyjshaw.commit--blog.com, https://sfpc.rileyjshaw.com, and https://rileyjshaw.com/blog), orders them by date, and paginates them into a nice on-site list.
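
    The collect-and-paginate step might look something like this sketch (the post shape and page size here are assumptions for illustration, not the site's actual code):

    ```javascript
    // Merge posts from several sources, sort newest-first, and chunk
    // them into pages for the on-site list.
    const POSTS_PER_PAGE = 10;

    function paginatePosts(sources, postsPerPage = POSTS_PER_PAGE) {
    	// Each source is assumed to be an array of { title, date, url }.
    	const allPosts = sources
    		.flat()
    		.sort((a, b) => new Date(b.date) - new Date(a.date)); // newest first

    	// Chunk the merged list into pages.
    	const pages = [];
    	for (let i = 0; i < allPosts.length; i += postsPerPage) {
    		pages.push(allPosts.slice(i, i + postsPerPage));
    	}
    	return pages;
    }
    ```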

    In the past, my blog has always seemed stale and outdated. By allowing it to gobble content from across the web, I can point it to whatever platform I'm currently publishing on!

    This commit is going to show up as a post on my blog, which feels rather meta.

  • Bootstrap an independent data scraper

    • From repository rileyjshaw/rileyjshaw-new

    Project scraper

    The projects on my site are automatically scraped and formatted at publish time using the scripts in this directory. Read more about my reasoning below, or skip to the directory structure.

    Why?

    Gatsby's source and transformer plugins are powerful, and I used them in the initial development of this site. I eventually decided that separating my collection process would be good for flexibility, control, and offline work.

    Flexibility

    GraphQL's filters and transforms are flexible, and Gatsby's APIs add more options for how data is fetched, cached, and transformed. However, complicated or non-standard data transforms and sanitization are much easier to write outside of Gatsby's ecosystem; the API starts to feel clunky for one-off treatment of specific content nodes.
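
    As an example of the kind of one-off treatment that's awkward inside Gatsby's node APIs but trivial in a standalone script, a cleanup pass might look like this (the field names and the specific fix are hypothetical):

    ```javascript
    // Standalone sanitization step, run before the data ever reaches Gatsby.
    function cleanProject(project) {
    	const cleaned = { ...project };

    	// Normalize dates to YYYY-MM-DD regardless of the source's format.
    	cleaned.date = new Date(project.date).toISOString().slice(0, 10);

    	// One-off fix: a hypothetical legacy project stored its demo URL
    	// under a differently named field.
    	if (cleaned.id === 'legacy-demo' && !cleaned.url) {
    		cleaned.url = cleaned.demoLink;
    	}

    	return cleaned;
    }
    ```

    In plain JavaScript this is a two-line special case; as a Gatsby transformer plugin it would need node-type matching and cache invalidation on top.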

  • Archive pre-2019 Heroku site; Update README.md

    • From repository rileyjshaw/xoxo-bingo

    Excerpt from the new README:

    ## timeline
    2015: first bingo! [eli](https://twitter.com/veryeli) and i used the attendee
    directory to generate a unique card for everyone (twitter login kept it private
    🔒). squares on your card were other attendees - if you met someone on your
    card you got to check it off. we made it cuz we’re shy. most of it is in the
    `pre-2019` folder!
    
    2016: we made the cards prettier by pulling in people’s twitter photos and
    doing imgmagick to them 🔮
    
    2017: no xoxo, no bingo… missed u all
    
    2018: xoxo was in the midst of changing their infrastructure, so i lost access
    to the attendee directory. [hannah](https://twitter.com/herlifeinpixels),
    [jason](https://twitter.com/justsomeguy) and i met in a cafe before the kickoff
    ceremony and designed a static version with input from the community. hannah
    and jason made 25 icons in like two minutes, it was incredible!!!
    
    2019: i've been too cheap to get https://xoxo.bingo in previous years, but
    [andy](https://twitter.com/andymcmillan) noticed a thread on slack and hooked
    us up! thx andy.
    
  • Firehose: proof of concept

    • From repository rileyjshaw/rileyjshaw-new

    I'm experimenting with auto-generating nodes for https://rileyjshaw.com/lab from a variety of data sources. This project may eventually replace https://rileyjshaw.com.

    This is the initial commit, completed quickly as a proof of concept. There's nothing much to show, but I want to deploy ASAP so I can test the full pipeline.

    So far, everything has worked! Data from a variety of sources is already appearing on my local server. To reproduce:

    I'm currently surfacing data from:

    Setting this up was EASY, which makes me excited for the future of this experiment :)
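
    As a rough illustration of the idea, auto-generating /lab nodes amounts to mapping each heterogeneous source into one common shape (the source names and fields below are hypothetical):

    ```javascript
    // Normalize items from different data sources into a common node
    // shape for the /lab feed.
    function toLabNode(source, item) {
    	switch (source) {
    		case 'github':
    			return { type: 'repo', title: item.name, date: item.pushed_at, url: item.html_url };
    		case 'codepen':
    			return { type: 'sketch', title: item.title, date: item.created_at, url: item.link };
    		default:
    			return { type: 'misc', title: item.title, date: item.date, url: item.url };
    	}
    }
    ```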