Monday, July 13, 2020

Show HN: Buttery, a DSL/runtime for defining HTTP APIs https://ift.tt/3gWGWys July 14, 2020 at 02:16AM

Show HN: Corona Cases https://ift.tt/32fRgxm July 14, 2020 at 01:50AM

How the SFMTA is Supporting Small Businesses 

By Bradley Dunn

The SFMTA, along with our city agency partners, is committed to working with local businesses to protect public health and ensure our transportation system supports a strong economic recovery. Small businesses are the lifeblood of San Francisco, and working with them as we recover is a key part of our Transportation Recovery Plan.

Below are some of the ways the SFMTA is partnering with other city agencies to support businesses. 

Shared Spaces 

To support small businesses, the SFMTA is working with agency partners to fast-track permits that enable businesses to use the public right-of-way for their operations. The Shared Spaces effort includes using the curb along requesting businesses’ frontages to provide space for curbside pickup and delivery, outdoor dining, or physical distancing where queues form. Note that not every business’s application will meet the criteria. Learn more about the program and apply here.

Parking Enforcement 

As economic activity increases, we are supporting parking availability and curb access as a strategy to provide access to commercial corridors and local small businesses. Our goal is to set parking meter rates so that one or two parking spaces are available on every block. That way patrons can visit local businesses without needing to circle to find parking, saving customers time and reducing frustration, all while reducing greenhouse gas emissions.

While meter rates vary throughout San Francisco, our plan restores meter prices to near pre-COVID-19 levels, with a $0.50/hour decrease. We will also restore pre-COVID parking meter time limits, enabling the customers critical to the health of small businesses to access commercial corridors.

Our approach to recovery is driven by data, and parking is no different. We hope to accelerate the demand-responsive pricing process to be flexible and tailor our parking policies to best serve the businesses in each commercial corridor. We typically reevaluate and adjust meter prices by $0.25 (up, down, or unchanged) based on demand data every three months. We plan to speed that process up to every six weeks so we can better reflect San Francisco’s changing needs as the economy reopens.
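The demand-responsive pricing cycle described above amounts to a simple rule: compare a block’s observed occupancy to target thresholds and nudge the hourly rate by $0.25 in the appropriate direction. The sketch below is illustrative only; the occupancy thresholds, rate floor, and rate ceiling are assumptions, not SFMTA policy values.

```python
def adjust_meter_rate(rate, occupancy, step=0.25, low=0.60, high=0.80,
                      min_rate=0.50, max_rate=8.00):
    """One demand-responsive adjustment cycle for a single block.

    occupancy: observed fraction of occupied spaces (0.0 to 1.0).
    Thresholds and rate bounds here are illustrative assumptions.
    """
    if occupancy > high:
        # parking is scarce: raise the hourly rate to free up spaces
        rate += step
    elif occupancy < low:
        # spaces are sitting empty: lower the rate to attract customers
        rate -= step
    # otherwise roughly one or two spaces per block are open; leave it alone
    return round(min(max(rate, min_rate), max_rate), 2)
```

Run per block once per adjustment cycle (every three months historically, every six weeks under the accelerated plan).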

Transit  

If even a fraction of the people who rode transit before the health crisis begin driving alone, congestion will be bad enough to cripple San Francisco’s economic recovery. Without helping employees and customers move about San Francisco, small businesses will suffer.

As the health orders allow more activity, we will be increasing Muni service and installing temporary emergency transit lanes to help reduce crowding. Transit lanes allow buses to complete their routes faster. This enables us to minimize the risk for employees and customers who must use Muni for essential trips, with minimal resources.

When Muni Metro service returns in August, we will implement temporary changes that address longstanding reliability challenges created by having all our rail lines enter the Metro tunnels. This operational structure has caused delays for employees getting to and from work for years. By linking the L Taraval and K Ingleside (with transfers for Downtown customers at West Portal) and having the J Church terminate at Duboce and Church (where customers can transfer to the N Judah and go downtown), we can reduce delays in the subway. These changes will be automatically removed 120 days after the emergency order is lifted unless there is a public process to make the improvements permanent. We will be getting public input about these improvements and evaluating their effectiveness to inform potential long-term changes.

Slow Streets 

To provide more space for people to bicycle or walk around their neighborhood, including to their local commercial corridors, we have implemented 24 miles of Slow Streets with an additional 10 miles to come. These traffic-calmed streets provide more space for bicycling and walking, enabling space on Muni to be used for essential trips by people who have no other options. We hope that these streets encourage San Franciscans to shop in their neighborhood and support local businesses. 

Additional Resources 

If you run a small business, the City and County of San Francisco offers additional resources to help during this time. You can find information on small business loans and grants; guidance on how to safely get back to business in the new normal; opportunities to defer business taxes and licensing fees; access to free COVID-19 testing for essential employees; and resources for self-employed individuals at oewd.org/covid19. We look forward to continuing our work with small businesses as we support the city’s recovery efforts.



Published July 13, 2020 at 10:34PM
https://ift.tt/303Vd5S

Show HN: Login with Matrix https://ift.tt/3iWjMdn July 13, 2020 at 06:54PM

Show HN: Learn coding by building 3D structures https://learn3d.io/ July 13, 2020 at 06:43PM

Show HN: A Twitter Clone (Hobby Project) https://ift.tt/2UTK06M July 13, 2020 at 04:51PM

Show HN: Fw – faster workspace (workspace productivity booster) https://ift.tt/2qfFfFu July 13, 2020 at 04:43PM

Launch HN: Aquarium (YC S20) – Improve Your ML Dataset Quality

Hi everyone! I’m Peter from Aquarium ( https://ift.tt/3dwAufn ). We help deep learning developers find problems in their datasets and models, then help fix them by smartly curating their datasets. We want to build the same high-power tooling for data curation that sophisticated ML companies like Cruise, Waymo, and Tesla have and bring it to the masses.

ML models are defined by a combination of code and the data that the code trains on. A programmer must think hard about what behavior they want from their model, assemble a dataset of labeled examples of what they want their model to do, and then train their model on that dataset. As they encounter errors in production, they must collect and label data for the model to train on to fix these errors, and verify they’re fixed by monitoring the model’s performance on a test set with previous failure cases. See Andrej Karpathy’s Software 2.0 article ( https://ift.tt/2hsOCzx ) for a great description of this workflow.

My cofounder Quinn and I were early engineers at Cruise Automation (YC W14), where we built the perception stack and ML infrastructure for self-driving cars. Quinn was tech lead of the ML infrastructure team and I was tech lead for the Perception team. We frequently ran into problems with our dataset that we needed to fix, and we found that most model improvement came from improving a dataset’s variety and quality. Basically, ML models are only as good as the datasets they’re trained on.

ML datasets need variety so the model can train on the types of data it will see in production environments. In one case, a safety driver noticed that our car was not detecting green construction cones. Why? When we looked into our dataset, it turned out that almost all of the cones we had labeled were orange. Our model had not seen many examples of green cones at training time, so it was performing quite badly on this object in production. We found and labeled more green cones into our training dataset, retrained the model, and it detected green cones just fine.

ML datasets need clean and consistent data so the model does not learn the wrong behavior. In another case, we retrained our model on a new batch of data that came from our labelers, and it performed much worse on detecting “slow signs” in our test dataset. After days of careful investigation, we realized it was due to a change to our labeling process that caused our labelers to label many “speed limit signs” as “slow signs,” which was confusing the model and causing it to perform badly on detecting “slow signs.” We fixed our labeling process, did an additional QA pass over our dataset to fix the bad labels, retrained our model on the clean data, and the problems went away.

While there’s a lot of tooling out there to debug and improve code, there’s not a lot of tooling to debug and improve datasets. As a result, it’s extremely painful to identify issues with variety and quality and appropriately modify datasets to fix them. ML engineers often encounter scenarios like:

- Your model’s accuracy measured on the test set is at 80%. You abstractly understand that the model is failing on the remaining 20%, and you have no idea why.
- Your model does great on your test set but performs disastrously when you deploy it to production, and you have no idea why.
- You retrain your model on some new data that came in, it’s worse, and you have no idea why.

ML teams want to understand what’s in their datasets, find problems in their dataset and model performance, and then edit / sample data to fix these problems. Most teams end up building their own one-off tooling in-house that isn’t very good. This tooling typically relies on naive methods of data curation that are really manual and involve “eyeballing” many examples in your dataset to discover labeling errors / failure patterns. This works well for small datasets but starts to fail as your dataset size grows above a few thousand examples.

Aquarium’s technology relies on letting your trained ML model do the work of guiding what parts of the dataset to pay attention to. Users can get started by submitting their labels and corresponding model predictions through our API. Then Aquarium lets users drill into their model performance - for example, visualize all examples where we confused a labeled car for a pedestrian from this date range - so users can understand the different failure modes of a model. Aquarium also finds examples where your model has the highest loss / disagreement with your labeled dataset, which tends to surface many labeling errors (i.e., the model is right and the label is wrong!).

Users can also provide their model’s embeddings for each entry, which are an anonymized representation of what their model “thought” about the data. The neural network embeddings for a datapoint (generated by either our users’ neural networks or by our stable of pretrained nets) encode the input data into a relatively short vector of floats. We can then identify outliers and group together examples in a dataset by analyzing the distances between these embeddings. We also provide a nice thousand-foot-view visualization of embeddings that allows users to zoom into interesting parts of their dataset. ( https://youtu.be/DHABgXXe-Fs?t=139 ) Since embeddings can be extracted from most neural networks, this makes our platform very general. We have successfully analyzed datasets and models operating on images, 3D point clouds from depth sensors, and audio.

After finding problems, Aquarium helps users solve them by editing or adding data. After finding bad data, Aquarium integrates into our users’ labeling platforms to automatically correct labeling errors. After finding patterns of model failures, Aquarium samples similar examples from users’ unlabeled datasets (green cones) and sends those to labeling. Think of this as a platform for interactive learning. By focusing on the most “important” areas of the dataset that the model is consistently getting wrong, we increase the leverage of ML teams to sift through massive datasets and decide on the proper corrective action to improve their model performance.

Our goal is to build tools to reduce or eliminate the need for ML engineers to handhold the process of improving model performance through data curation - basically, Andrej Karpathy’s Operation Vacation concept ( https://youtu.be/g2R2T631x7k?t=820 ) as a service.

If any of those experiences speak to you, we’d love to hear your thoughts and feedback. We’ll be here to answer any questions you might have! July 13, 2020 at 05:05PM
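The two analyses described in the Aquarium post - flagging outliers by embedding distance, and surfacing examples where the model most disagrees with its labels - can be sketched in plain NumPy. This is an illustrative sketch of the general techniques, not Aquarium's actual API; the function names and the choice of Euclidean distance are assumptions.

```python
import numpy as np

def knn_outlier_scores(embeddings, k=5):
    """Score each row by its mean distance to its k nearest neighbors.

    High scores mark datapoints unlike the rest of the dataset
    (e.g. a green cone in a sea of orange ones).
    """
    # pairwise Euclidean distances, shape (n, n)
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k]      # k closest neighbors per row
    return knn.mean(axis=1)

def disagreement_scores(pred_probs, labels):
    """Cross-entropy of the labeled class under the model's predictions.

    High scores mean the model strongly disagrees with the label,
    which often indicates the label (not the model) is wrong.
    """
    n = len(labels)
    return -np.log(pred_probs[np.arange(n), labels] + 1e-12)
```

Sorting a dataset by either score yields a review queue: the top-scoring examples go to human QA or relabeling first, which is far cheaper than eyeballing the whole dataset.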

Show HN: A Simple Search Engine https://kuurio.com July 13, 2020 at 04:58PM

Show HN: Income/savings calculator for moving to Canada https://ift.tt/2Zpzcz3 July 13, 2020 at 04:47PM

Show HN: Simple Google Login in Go https://ift.tt/2OjvCjV July 13, 2020 at 12:35PM

Show HN: Primo – all-in-one IDE, CMS, component library, static site generator https://primo.af July 13, 2020 at 02:51PM

Show HN: Soup.io Downloader https://ift.tt/305breP July 13, 2020 at 11:11AM

Show HN: A thread hierarchy management library in C https://ift.tt/2CuVYwu July 13, 2020 at 02:21PM