CascadiaJS 2021 Overview (Seattle)
Written on: November 13, 2021
Takeaways
Really well organized conference with many different ways of following along and participating. This year was a hybrid in-person/virtual event: the in-person portion happened at MoPOP and the virtual portion was split between Discord, Gather, and their own streaming platform.
The talks were a good variety of personal projects, advice for day-to-day development, and more general discussions around various JS-related topics. Good opportunity for exposure to the many ways JS can be used and the flexibility it offers.
I appreciated that they started off every day by recognizing the Native land that the conference was being hosted on and giving thanks/recognition to the local tribes. They also used proceeds and donations from the conference to buy tickets for those who could not afford to attend.
Resources
Collection of various resources shared during the presentations. Not an exhaustive list, but roughly sorted by usefulness in day-to-day tasks.
- All of the talks will be uploaded to this YouTube channel once they are ready.
- Tool for verifying npm packages (still in beta)
- TypeScript challenges for practice
- React hook encapsulation strategies by Kyle Shevlin
Presentations
Day 1 - 11/03/21
Responsive Components: Present & Future - James Steinbach - 9AM
Focus of talk was around how we currently use CSS and what's coming down the pipeline. Informative but not actionable unless you're willing to use the canary version of Chrome; the topics will start to become more relevant in 2022, however. Some ways we currently lay things out:
- "The Holy Albatross" approach uses flex-basis and calc() to toggle between column and row layouts. Heydon Pickering came up with it; more here: the-flexbox-holy-albatross-reincarnated and the-flexbox-holy-albatross. Pretty interesting and quick reads.
- CSS Grid, which adapts the column count to child width
- We can clamp() font sizes, but it's based on viewport size, not component size. Not a very common approach either.
- ResizeObserver, a JS API that watches element size so you can add/remove classes, but it does require JS: https://developer.mozilla.org/en-US/docs/Web/API/ResizeObserver
  - ro.observe(el) / ro.unobserve(el) (see the sketch below)
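A minimal sketch of that ResizeObserver approach, assuming a made-up .card element, class name, and 500px breakpoint:

```ts
// Watch the component's own width (not the viewport) and toggle a layout class.
const card = document.querySelector<HTMLElement>('.card'); // hypothetical element

if (card) {
  const ro = new ResizeObserver((entries) => {
    for (const entry of entries) {
      entry.target.classList.toggle('card--wide', entry.contentRect.width > 500);
    }
  });

  ro.observe(card);
  // When the component goes away: ro.unobserve(card);
}
```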
So, what’s next?
- We had selector queries around 2011, media queries around 2013, container-based responsive web design with element queries around 2015, and we're now moving towards container queries.
- How will they work? (still in progress)
  - We will create a container via @container someName (min-width: 480px)
  - It will disable content-based sizing (the spec is still a WIP)
  - Then we style the children
  - We can also name containers (optional) via container-name: main and target them with @container main (min-width: 480px)
- If you're interested in trying it today, there is a polyfill called cqfill
So you think you know window.open - Jessica Campos - 945AM
Focus of the talk was on the window object and how windows and tabs form their own networks; if you are working on a project where you have to be aware of open tabs and windows, and/or redirect users between them while keeping data intact (or refreshing it), it is not straightforward.
- Different networks of tabs can be created in a few ways, and using state to control what's displayed in which tab requires testing the behaviors
- There is some funkiness with using e.preventDefault() while cmd-clicking to open new tabs
- Seems like there are some weird corners, but nothing that stood out as relevant to day-to-day work unless you are on a project which does this
- It is helpful to reload existing tabs instead of creating new ones if you need to (see the sketch below)
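A minimal sketch (not from the talk) of reusing an existing tab rather than opening a new one, by giving the window a name; the URL and the name are made up:

```ts
// Passing the same window *name* navigates the tab previously opened with that
// name instead of spawning another one.
function openDashboard(url = 'https://example.com/dashboard') {
  const win = window.open(url, 'dashboard-tab'); // second arg is a name, not a target like _blank

  // Popup blockers can make this null, so guard before using the reference
  win?.focus();
  return win;
}
```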
Using Lighthouse in the real world - Leonardo Faria - 1045AM
Focus of talk was on how performance optimizations can be pursued and automated. Not much different from how we approach optimizations now but he did share additional tooling if a client wanted to go more in depth.
- Lighthouse can be run in multiple ways, not just through PageSpeed Insights or the browser console, which is the most common way we've used it
- Reports can be formatted as JSON, which helps us automate things down the road (see the sketch after this list)
- The final report is calculated based on several factors, such as:
  - Web Vitals (Core Web Vitals)
    - LCP (Largest Contentful Paint) - how long it takes for the largest element to appear
    - FID (First Input Delay) - the delay before the browser can respond to the first user input
    - CLS (Cumulative Layout Shift) - unexpected shifts of layout content
  - The Lighthouse report allows you to filter by any one of these factors
- The web.dev website has resources for optimizing Web Vitals
- You'll hear of something called Lab vs. Field data
  - Lab data: Lighthouse performs tests in a controlled environment; helpful for isolating and fixing specific issues
  - Field data: reports based on real user traffic; difficult for identifying specific issues but can surface larger trends
  - You want to combine lab data and field data to improve performance
- A HAR file is a JSON-formatted record of the browser's interaction with a page
  - Contains things like cookies, headers, and response times
  - Can be visualized and exported from the Network tab of dev tools
  - You can use http://www.softwareishard.com/har/viewer to view HAR files
- How do we use these tools in the wild?
  - Critical user journeys are important product areas to monitor
  - You can run tests after deployments by parsing the Lighthouse scores, asset file sizes, and Web Vitals numbers into a DB
  - Then you can use these to create custom reports and flags based on the values. If something drops drastically, you may want to investigate.
- Some automated lab test tools:
  - Lighthouse Monitor: https://github.com/Verivox/lighthouse-monitor
  - Calibre: https://calibreapp.com/features/lighthouse
- How do these tools help us?
  - Lighthouse reports help identify low-hanging fruit vs. architecture problems
  - They help distinguish features not used very often but used by everyone vs. features used often but by specific audiences
- Field data collection tools:
  - Application Performance Monitoring: New Relic, Datadog, Elastic
  - Chrome User Experience Report: PSI, Search Console, the CrUX dashboard
  - A custom solution using the web-vitals JS library (not recommended)
- For issues which aren't clear, use the scientific method
  - Use feature flags if you're unsure about a change's impact. LaunchDarkly is a tool for managing feature flags: https://launchdarkly.com/
- When building your own automated tools, think about:
  - Scoping the solution
  - Identifying risks and a timeline
  - User and business benefits
  - Creating FAQs for non-tech stakeholders to reduce misunderstandings
- Using an average of multiple test runs can help get a more accurate score
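As a rough sketch of that post-deploy automation (assuming the lighthouse and chrome-launcher npm packages; the URL, category filter, and "save to a DB" step are placeholders):

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function auditPage(url: string) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(url, {
    port: chrome.port,
    output: 'json',
    onlyCategories: ['performance'],
  });

  const lhr = result?.lhr;
  const score = (lhr?.categories.performance.score ?? 0) * 100;
  const lcpMs = lhr?.audits['largest-contentful-paint'].numericValue;

  console.log(`${url}: performance ${score}, LCP ${lcpMs} ms`);
  // Write { url, score, lcpMs, deployedAt } to a DB here and flag big regressions.

  await chrome.kill();
}

auditPage('https://example.com');
```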
Creating a Culture of FrontEnd Performance - Andrew Hao - 1110AM
Focus of the talk was interesting, but by the end it felt like a manager asking you to do more in your extra time and not get paid for it. We as contractors don't typically fall into these roles where we would be the evangelists for starting a culture in the workspace. However, some of his points can be translated into our day-to-day work. Points like:
- Shifting the culture through a community of practice
  - This is something we consciously try to implement when we land on projects or teams where things are not going well, to the point where consultants had to be brought in
  - We also do this on new projects by practicing what we preach in regards to code quality, PR structure, test coverage expectations, etc.
- Find allies to help you create the culture
  - We should cultivate relationships with the devs who practice and support good coding habits
- Some tools for monitoring performance (at Lyft):
  - A Lighthouse runner which helps track long-term trends and accountability
  - A bundle size reporter to fight bloat before it merges (see the sketch after this list)
  - A Core Web Vitals plugin to help track modern performance metrics
  - Fleet-wide performance dashboards to make reports visible to everyone
  - Tools are important because they help enforce the culture of performance
  - Documentation can also help others get involved
- This extends to aspects other than performance: accessibility, testing, tooling, etc.
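A minimal sketch of what a bundle size reporter can boil down to (the budget and output path are made up; Lyft's actual tooling wasn't shown):

```ts
// Fail CI if the main bundle grows past an agreed budget.
import { statSync } from 'node:fs';

const BUDGET_BYTES = 250 * 1024;    // assumed budget
const bundlePath = 'dist/main.js';  // assumed build output

const size = statSync(bundlePath).size;
console.log(`${bundlePath}: ${(size / 1024).toFixed(1)} kB (budget ${BUDGET_BYTES / 1024} kB)`);

if (size > BUDGET_BYTES) {
  console.error('Bundle exceeds budget - investigate before merging.');
  process.exit(1);
}
```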
Enter the Sandbox: JS on WebAssembly - Aaron Turner - 1135AM
Focus of talk was on WebAssembly which is something I don’t know much about. Presenter was easy to follow and seemed passionate about the topic so it helped.
- WebAssembly is good for running logic-heavy tasks
  - Offers predictable performance
  - JS is compiled many times by the JIT and can fall off the fast path
- WebAssembly is very portable
  - Supported by all modern browsers
  - Runs in Node
  - Has standalone runtimes
- AssemblyScript is a TypeScript-like language that compiles to WebAssembly
- Rust is another language that can be compiled to WebAssembly
- Emscripten is a toolchain which turns C into WebAssembly
- JS itself cannot be compiled to WebAssembly today
  - You have to use AssemblyScript (or another language that compiles to it)
- WebAssembly uses linear memory, which makes it easy to sandbox
- What is WASI?
  - A system interface for WebAssembly
  - Can be implemented by standalone runtimes
  - Polyfills exist to run it in the browser
- Mostly a theoretical talk with some personal examples from the presenter (see the loading sketch below)
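For context, a minimal sketch (not from the talk) of calling into a compiled Wasm module from JS; add.wasm and its exported add() function are hypothetical:

```ts
async function run() {
  // Fetch the compiled module and instantiate it with no imports
  const bytes = await fetch('/add.wasm').then((r) => r.arrayBuffer());
  const { instance } = await WebAssembly.instantiate(bytes, {});

  const add = instance.exports.add as (a: number, b: number) => number;
  console.log(add(2, 3)); // 5 - same behavior in any browser, Node, or standalone runtime
}

run();
```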
useEncapsulation: Using React Sensibly - Kyle Shevlin - 4PM
The patterns presented here are something we can use in our day to day projects.
- Making code easier to read will make it easier to change
- Wrap useEffect calls in custom hooks and leave comments to make them more readable (see the sketch at the end of this section)
- Encapsulate concerns to find the reusable parts
- You can create custom eslint rules to screen for encapsulation patterns, though it's a bit heavy-handed: eslint-plugin-use-encapsulation
- Resource links shared by the presenter:
- https://kyleshevlin.com/
- https://kyleshevlin.com/encapsulation
- https://kyleshevlin.com/use-encapsulation
- https://github.com/kyleshevlin/contrived-example-for-use-encapsulation
- https://github.com/kyleshevlin/eslint-plugin-use-encapsulation
- https://kyleshevlin.com/prefer-declarative-state-updaters
- https://kyleshevlin.com/what-is-a-tuple
- https://github.com/kyleshevlin/contrived-example-for-use-encapsulation/pull/1
- https://aheadcreative.co.uk/articles/when-to-use-react-usecallback/
Quote from the presenter:
With my encapsulation pattern, and other things I've written about on my blog, I typically end up making a state and a handlers object with methods, so I almost never need useCallback. I end up using useMemo instead on the whole object of handlers. That said, every div and button in a React app is a function call; you shouldn't be afraid of making an inline function. If you notice a performance problem, then it might be worth stabilizing the function. One thing to pay attention to is whether your callbacks are being called on elements in THIS component or passed as props. If it's props, you may want to stabilize so that you don't cause unnecessary re-renders downstream in children.
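A minimal sketch of the pattern as I understood it (not the presenter's code; the hook names and concerns are made up):

```ts
import { useEffect, useMemo, useState } from 'react';

// One concern, one hook - easy to name, comment, and reuse
function useDocumentTitle(title: string) {
  useEffect(() => {
    document.title = title;
  }, [title]);
}

// State plus a memoized handlers object, so useCallback is rarely needed
function useCounter(initial = 0) {
  const [count, setCount] = useState(initial);

  const handlers = useMemo(
    () => ({
      increment: () => setCount((c) => c + 1),
      reset: () => setCount(initial),
    }),
    [initial]
  );

  return [count, handlers] as const; // a tuple, as in the linked posts
}
```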
Day 2 - 11/04/21
Hands-free Coding with Gaze Control in JS - Charlie Gerard - 9AM
Focus of talk was presenting a personal project which uses only your eyes to write JS code.
- Her current project is not prod-ready but rather an experiment
- Inspiration comes from an app called "Look to Speak"
  - Intended for people with restricted mobility
  - Uses the device's camera to track left or right gaze and narrow down choices until only the one that should be spoken remains. AI adds a whole new dimension to this.
- Three main tools were used to build this gaze control: tensorFlow.js, react and robot.js
- Writing code by moving your eyes left and right to select things can get really quick in combination with tools like code snippets and GitHub Copilot
- There are still lots of limitations to making it usable, but I was really impressed with how far one person could get. Great step forward for hands-free coding.
Mask || No Mask? Detect Masks with ML5.JS - Lizzie Siegle - 940AM
Focus of talk was a personal project which uses AI to detect whether a video chat participant is wearing a mask or not. Pretty fun but overall a vanity project and not much for us to use day to day. Cool to see what people are doing with JS though!
- ml5.js is built on top of tensorFlow.js, which makes it kind of an interface to tensorFlow.js
  - Makes it easier to use pre-trained models, generate text, make music, do image recognition, etc.
- Algorithms can be classified into two groups by how they learn: supervised and unsupervised
- You can have regression or classification algorithms. Regression is continuous and classification is A or B. It's OK to use both.
Web3 for Fun and Profit - Brooklyn Zelenka - 930AM
Focus of talk was around the “future of the internet,” covering some things about Web3, which is building a whole new infrastructure around how we organize, create, and use the internet. The presenter seemed very intelligent and the information was really interesting in a theoretical sense. Not much that translates to day-to-day work.
- What is happening around Web3? Same things that happened before:
  - Decentralization
  - Non-discrimination
  - Bottom-up design
  - Universality
  - Consensus
- The current internet model is centralized. This has to do with governments being able to regulate it through laws.
- A decentralized version would be hub-based, with multiple internet nodes
- A distributed version would be a mesh network where anyone can be a provider and a consumer. Some nodes may be more connected than others.
- The rate and volume of data coming down the pipeline is only going up, and it's not feasible to send it all to us-east-1 forever
  - Distributed networks are going to be required in order to handle the data coming through
- Web 1.0 was for the consumer, Web 2.0 was for the creator, Web 3.0 will be for the owner (owning your data)
  - You will have provable ownership of your data with digital keys
    - Unique access to a private key
    - Flexible, secure, anyone can generate one
    - The key management problem is becoming more solved (via hardware modules too, not just software)
- We will see user-managed encryption instead of DB storage of personal information
- Adoption of user-managed encryption (keys) would reduce the complexity of authentication (architectures like Auth0)
  - The JWTs we use today can be used to encrypt other information
- Open source at the data layer. Fun things created with it:
  - www.cryptokitties.co - stored on a blockchain (or DNS)
  - Kittyrace.com
  - Kotowars.com
- Tools to get started?
Developer Experience For Internal Teams - Ian Sutherland - 1115AM
Focus of talk was around the importance of developer experience and how it can help attract talent, lead to better quality code and overall employee satisfaction. Something that isn’t focused on enough and something we can directly impact by using and enforcing standards around tools.
- What are some good developer experiences?
  - Stripe is a good example
    - APIs & SDKs available
    - Clear docs
    - A sandbox
  - GitHub is another one
    - GitHub Actions is useful and easy
    - Has a CLI
    - Now has web-based VS Code
    - Has Codespaces
- Developer experience helps with developer flow
  - It increases happiness and productivity
  - Long build times decrease happiness
  - DX work typically starts informally by automating certain tasks (see the sketch after this list)
- Some tools to help:
  - CLI apps
  - Desktop apps
  - Web apps
  - GitHub Actions
  - Slack bots
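A minimal sketch of the "start by automating a task" idea: a tiny internal CLI that wraps a repetitive setup ritual (the npm scripts it calls are made up):

```ts
#!/usr/bin/env node
import { execSync } from 'node:child_process';

// One command instead of a wiki page of setup steps for new devs.
const steps = [
  'npm ci',
  'npm run db:seed', // assumed project script
  'npm run dev',
];

for (const cmd of steps) {
  console.log(`\n> ${cmd}`);
  execSync(cmd, { stdio: 'inherit' });
}
```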
The Fellowship of the String - Jacques Favreau - 1115AM
Focus of talk was on personal experience with a large refactor at Netflix. The presentation was done in more of a storyline format, so no real notes, but lots of goodies on programming theory and approach.
- Covers refactoring with empathy and in small bites
- How to help make large refactors easier
- Using ESLint to warn/alert on unused patterns or patterns which will be deprecated soon (leave your contact info in the codemod) - see the sketch below
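A minimal sketch of that ESLint idea using the built-in no-restricted-imports rule; the deprecated module name, message, and Slack channel are made up:

```js
// .eslintrc.js (CommonJS config)
module.exports = {
  rules: {
    'no-restricted-imports': [
      'warn',
      {
        paths: [
          {
            name: 'legacy-strings', // hypothetical soon-to-be-deprecated module
            message: 'Deprecated: use the new string API. Questions? Ping #strings-migration.',
          },
        ],
      },
    ],
  },
};
```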
A Wild Typescript Safari - Daria Caraway - 2PM
Focus of talk was an intro to TypeScript and how to break down complicated Types. Good talk around how to approach looking at Typescript as a newbie. Got a bit too in depth for me at certain points but rewatching it would probably help clarify some things.
- Generic types can be useful for making types
  - Usually denoted by single letters like <T>, but they don't have to be: type IsTruthy<TypePassedIn>
- Union types are used to compose other types or can be used on their own
- Type inference will decide what the type is based on the surrounding context (if possible)
- Mapped type modifiers can be applied to mapped types
  - e.g. each key in a type can be made required if it wasn't before, and so on
- https://github.com/type-challenges/type-challenges for TS practice (see the sketch below)
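A few tiny illustrations of those ideas (my own examples, not the presenter's):

```ts
// Generic type, single-letter parameter by convention only
type IsTruthy<T> = T extends false | 0 | '' | null | undefined ? false : true;
type A = IsTruthy<'hello'>; // true

// Union type composing other types
type Status = 'loading' | 'success' | 'error';

// Type inference: no annotation needed, `count` is inferred as number
const count = 2 + 3;

// Mapped type modifier: strip the optional `?` from every key
type AllRequired<T> = { [K in keyof T]-?: T[K] };
type Config = { retries?: number; verbose?: boolean };
type StrictConfig = AllRequired<Config>; // both keys now required
```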
Cartography on the Web - Derek Hurley - 230PM
Focus of talk was around a personal project using interactive maps and open source data sets. Good example of how you can use publicly available data to make interactive maps. A lot of small details go into building maps on the web; this is a good intro to it.
- Charts, graphs, tables, and interactive maps help contextualize data
- His employer is aclima.io, which measures air quality block by block at regional scales. They translate pixels and contextualize them on a map.
- The Oregon wildfires are a good example of the importance of maps on the web
- Covid-19 cases are another good example. Seeing it on a map makes the problem more real.
- The project he built uses treeequityscore.org, which pulls information from a variety of factors to assign a score to neighborhoods.
  - See it at treelandia.vercel.app
- How to create interactive maps:
  - Choose a map data source
  - Prepare data for the map
    - One format is GeoJSON: geojson.org
      - geometry: points, lines, coordinates
      - properties: name, id, size, etc.
      - Easy to build and change at runtime
      - Needs all data in the browser at once
      - Useful for smaller or real-time datasets (e.g. real-time status)
    - Vector tiles
      - Optimized and load in parallel as you pan and zoom
      - Can paint quickly and respond to a lot of information
      - Compressed and built ahead of time
      - Won't have all the data at once
      - Useful for large and stable datasets
  - Interacting with the map
    - Design decisions: which layers can be interacted with? Some might just be visual.
    - Does zoom matter? With maps you have a Z dimension, not just X and Y.
- Play to layer strengths
  - Data layers
    - Source of truth
    - Often vector tiles, as they are large
  - UI layers
    - Data derived from other layers
    - Often GeoJSON, as they are smaller
- What the initialization sequence looks like:
  - App renders
  - App sends config to the map
  - Map loads and starts fetching data (requesting tiles)
  - Map broadcasts that it's loaded
  - Map continues fetching data
  - App runs map-dependent logic (after the map is loaded)
- The initialization process can result in race conditions; the map can be a black box
  - Can use Promise-based code to ensure the map is loaded, then poll continuously (see the sketch below)
  - Leads to a more stable interface when filters are involved
  - Well suited for tile-based data
- You have to consider your dataset to choose the correct technique for loading the map
- Create boundaries between the map and your larger wrapper app
- Common issues: layers not being in the right order and hiding other elements; flipping latitude and longitude
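A minimal sketch of promise-wrapping the map load, assuming a Mapbox GL JS-style API; the access token, container id, style, and GeoJSON source are placeholders:

```ts
import mapboxgl from 'mapbox-gl';

mapboxgl.accessToken = 'YOUR_TOKEN_HERE'; // placeholder

function createMap(): Promise<mapboxgl.Map> {
  const map = new mapboxgl.Map({
    container: 'map',                          // hypothetical element id
    style: 'mapbox://styles/mapbox/light-v10', // placeholder style
  });

  // Resolve only once the map finishes its initial load, so app logic
  // that depends on the map never races the "black box".
  return new Promise((resolve) => map.on('load', () => resolve(map)));
}

async function init() {
  const map = await createMap();
  map.addSource('neighborhoods', {
    type: 'geojson',
    data: '/neighborhoods.geojson', // hypothetical UI-layer data
  });
}

init();
```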
JS, Cyber-Abuse and Illusion of Privacy - Garann Means - 345PM
Focus of talk was about the unforeseen results of software decisions, specifically as they relate to cyber abuse. Not something I think about very often, so it was good to be exposed to the topic and to think about which of the decisions I'm making could be better at preventing cyber abuse.
- Talk included an overview of security risks and things devs do well and not so well to prevent cyber abuse
- Cyber abuse involves the collection of personal data
- Cyber abusers benefit from privacy controls which we implement with good intentions
- One approach is security through obscurity
- More granular auth access could let owners restrict access on a case-by-case basis
- Having identities be meaningful only in the context of your app: "identity-less identity"
- Have images with differing levels of privacy. Reverse image search is frequently used by abusers.
Open Source Supply Chain Attacks - Feross Aboukhadijeh - 230PM
Focus of talk was around third-party dependency vulnerabilities, how to spot them, and how to prevent installing malicious code. Good talk which has implications for day-to-day work. The presenter is working on a tool to help spot bad npm packages; we can use it now even though it's in beta.
- Open source software is used everywhere
- Very few of us have read the code we ship to production
- Vulnerabilities are accidentally introduced
- Malware is intentionally introduced
- Be careful with packages which are named similarly to more popular packages. It's called typosquatting.
- Dependency confusion attacks publish a public package with the same name as an internal/private package, registering it before the private registry has a chance to, so the public version gets installed
- Code on npm can differ from the code on GitHub
- npm config set ignore-scripts true prevents scripts from running when installing new packages (see the sketch below)
- This is a problem that will get worse. What can we do? A tool is on the way to help identify risks:
  - Socket: https://socket.security/
  - https://socket.dev/npm/package/bowserify
  - They are working on a way to bring this into GitHub
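A small sketch related to the install-scripts point: you can check what lifecycle scripts a package declares before installing it, using the public npm registry metadata (the package name here is just an example):

```ts
// Warn if a package declares install-time scripts (preinstall/install/postinstall),
// which is where malicious code in supply chain attacks often runs.
async function checkInstallScripts(pkg: string) {
  const res = await fetch(`https://registry.npmjs.org/${pkg}/latest`);
  const manifest = await res.json();

  const scripts: Record<string, string> = manifest.scripts ?? {};
  const risky = ['preinstall', 'install', 'postinstall'].filter((s) => s in scripts);

  if (risky.length > 0) {
    console.warn(`${pkg} declares install-time scripts:`, risky);
  } else {
    console.log(`${pkg} declares no install-time scripts`);
  }
}

checkInstallScripts('browserify');
```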