Monday, May 11, 2020

Show HN: Save 100 Hours with $100 for Premium React Native Templates – Atozui.com https://ift.tt/2YUjxrS

Show HN: Save 100 Hours with $100 for Premium React Native Templates – Atozui.com Having been a developer for the past 10 years, I have been hustling on side projects ever since I got my first job. Whenever a new idea popped up in the middle of a 9-to-6 job, there was always a time crunch to get it production-ready with top-notch UI quality. Finding the right design, deciding how each page should look, which icons to use, and which colors best suit the app consumed most of my time. This caused decision fatigue, and I couldn't focus on actually building the product. Many of my friends were facing the same issue. So, to help myself in the future and anyone else who struggles with decision fatigue when choosing colors and UI designs, we built AtoZUi.com. We plan to release a wide range of mobile app templates for all kinds of apps, with premium-quality UI. Please check out the sample Expo or Play Store app and let us know your feedback, and whether there are any specific templates you would like to see. https://atozui.com https://ift.tt/2xUQGIT May 12, 2020 at 07:41AM

Show HN: Jhall – A JavaScript Alternative to Dhall https://ift.tt/3cq1rSu

Show HN: Jhall – A JavaScript Alternative to Dhall https://ift.tt/2AkJR4a May 12, 2020 at 06:01AM

Show HN: Ideation Tool https://ift.tt/2WmYgVQ

Show HN: Ideation Tool https://idea.surge.sh/ May 12, 2020 at 05:44AM

Show HN: Simple IRC Services System https://ift.tt/2WLORq7

Show HN: Simple IRC Services System https://ift.tt/2WH59jW May 12, 2020 at 03:17AM

Show HN: Real-Time Session Invalidation https://ift.tt/2yOgR4b

Show HN: Real-Time Session Invalidation https://ift.tt/2LltTco May 12, 2020 at 01:34AM

Show HN: Flight Instruments in Snap SVG and JavaScript https://ift.tt/2LlrZYY

Show HN: Flight Instruments in Snap SVG and JavaScript https://fl7b8.csb.app/ May 11, 2020 at 08:08PM

Show HN: Space-themed roguelike made in 7 days https://ift.tt/3bu7ynD

Show HN: Space-themed roguelike made in 7 days https://ift.tt/38zIZnJ May 11, 2020 at 08:46PM

Show HN: Always on top webcam for Screen Sharing https://ift.tt/2yDcyJd

Show HN: Always on top webcam for Screen Sharing https://ift.tt/2xThusR May 11, 2020 at 08:11PM

Show HN: Brewlet – The Missing Menulet for Brew.sh https://ift.tt/2YTOzjF

Show HN: Brewlet – The Missing Menulet for Brew.sh https://ift.tt/3coHpHR May 11, 2020 at 05:47PM

Show HN: Product Hunt for Niche Podcasts https://ift.tt/3duopb4

Show HN: Product Hunt for Niche Podcasts https://ift.tt/2YUOLPl May 11, 2020 at 07:15PM

Show HN: Grammarly to Markdown (Browser Extension) https://ift.tt/3fEuJP9

Show HN: Grammarly to Markdown (Browser Extension) https://ift.tt/2AjjRGk May 11, 2020 at 06:53PM

Show HN: Service for aspiring programmers to get code reviews from experts https://ift.tt/2Wlr5Sz

Show HN: Service for aspiring programmers to get code reviews from experts https://engchannel.com May 11, 2020 at 06:17PM

Launch HN: Data Mechanics (YC S19) – The Simplest Way to Run Apache Spark https://ift.tt/3cmW7iP

Launch HN: Data Mechanics (YC S19) – The Simplest Way to Run Apache Spark Hi HN, We’re JY & Julien, co-founders of Data Mechanics ( https://ift.tt/2Ll02Aw ), a big data platform striving to offer the simplest way to run Apache Spark.

Apache Spark is an open-source distributed computing engine. It’s the most used technology in big data. First, because it’s fast (10-100x faster than Hadoop MapReduce). Second, because it offers simple, high-level APIs in Scala, Python, SQL, and R. In a few lines of code, data scientists and engineers can explore data, train machine learning models, and build batch or streaming pipelines over very large datasets (sizes ranging from 10 GB to petabytes).

While writing Spark applications is pretty easy, managing their infrastructure, deploying them, and keeping them performant and stable in production over time is hard. You need to learn how Apache Spark works under the hood, become an expert with YARN and the JVM, manually choose dozens of infrastructure parameters and Spark configurations, and go through painfully slow iteration cycles to develop, debug, and productionize your app.

As you can tell, before starting Data Mechanics, we were frustrated Spark developers. Julien was a data scientist and data engineer at BlaBlaCar and ContentSquare. JY was the Spark infrastructure team lead at Databricks, the data science platform founded by the creators of Spark. We’ve designed Data Mechanics so that our peer data scientists and engineers can focus on their core mission - building models and pipelines - while the platform handles the mechanical DevOps work.

To realize this goal, we needed a way to tune infrastructure parameters and Spark configurations automatically. There are dozens of such parameters, but the most critical ones are the amount of memory and CPU allocated to each node, the degree of parallelism of Spark, and the way Spark handles all-to-all data transfer stages (called shuffles). It takes a lot of expertise and trial-and-error loops to tune those parameters manually. To do it automatically, we first run the logs and metadata produced by Spark through a set of heuristics that determine whether the application is stable and performant. A Bayesian optimization algorithm uses this analysis, as well as data from recent runs, to choose a set of parameters for the next run. It’s not perfect - it needs a few iterations, like an engineer would. But the impact is huge, because this happens automatically for each application running on the platform (which would be too time-consuming for an engineer). Take the example of an application gradually going unstable as its input data grows over time. Without us, the application crashes on a random day, and an engineer must spend a day remediating the impact of the outage and debugging the app. Our platform can often anticipate and avoid the outage altogether.

The other way we differentiate is by integrating with the popular tools from the data stack. Enterprise data science platforms tend to require their users to abandon their tools and adopt an end-to-end suite of proprietary solutions: their hosted notebooks, their scheduler, their way of packaging dependencies and version-controlling your code. Instead, our users can connect their Jupyter notebook, their Airflow scheduler, and their favourite IDE directly to the platform. This enables a seamless transition from local development to running at scale on the platform.
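To make the parameters described above concrete, here is a minimal, generic PySpark sketch (plain open-source Spark, not Data Mechanics' API) showing the high-level DataFrame API alongside the kind of infrastructure settings that otherwise have to be tuned by hand; the paths and configuration values are placeholders, not recommendations.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Examples of the knobs discussed above: executor memory/CPU and shuffle
# parallelism. The numbers are placeholders, not tuning advice.
spark = (
    SparkSession.builder
    .appName("daily-events-aggregation")
    .config("spark.executor.memory", "4g")          # memory per executor
    .config("spark.executor.cores", "2")            # CPU cores per executor
    .config("spark.sql.shuffle.partitions", "200")  # parallelism of shuffle stages
    .getOrCreate()
)

# A typical batch pipeline: read, aggregate, write. The bucket paths are hypothetical.
events = spark.read.parquet("s3a://example-bucket/events/2020-05-11/")
daily_counts = (
    events.groupBy("user_id")                 # triggers an all-to-all shuffle
          .agg(F.count("*").alias("n_events"))
)
daily_counts.write.mode("overwrite").parquet("s3a://example-bucket/daily_counts/")

spark.stop()
```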
We also deploy Spark directly on Kubernetes, which wasn’t possible until recently (Spark version 2.3) - most Spark platforms run on YARN instead. This means our users can package their code dependencies in a Docker image and use a lot of k8s-compatible projects for free (for example around secrets management and monitoring). Kubernetes does have its inherent complexity. We hide it from our users by deploying Data Mechanics in their cloud account, on a Kubernetes cluster that we manage for them. Our users simply interact with our web UI and our API/CLI - they don’t need to poke around Kubernetes unless they really want to. The platform is available on AWS, GCP, and Azure.

Many of our customers use us for their ETL pipelines; they appreciate the ease of use of the platform and the performance boost from automated tuning. We’ve also helped companies start their first Spark project: a startup is using us to parallelize chemistry computations and accelerate the discovery of drugs. This is our ultimate goal - to make distributed data processing accessible to all. Of course, we share this mission with many companies out there, but we hope you’ll find our angle interesting!

We’re excited to share our story with the HN community today, and we look forward to hearing about your experience in the data engineering and data science spaces. Have you used Spark, and did you feel the frustrations we talked about? If you’re considering Spark for your next project, does our platform look appealing? We don’t offer self-service deployment yet, but you can schedule a demo with us from the website and we’ll be happy to give you free trial access in exchange for your feedback. Thank you! May 11, 2020 at 04:58PM
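For readers who haven't seen Spark's native Kubernetes support (available since Spark 2.3, client mode since 2.4), here is a rough sketch of what it looks like in plain open-source Spark; this is not how the Data Mechanics platform is exposed, and the API server address, image name, namespace, and instance count are all placeholders.

```python
from pyspark.sql import SparkSession

# Client-mode sketch of Spark on Kubernetes: executors are launched as pods
# built from the given container image. Cluster mode is usually submitted via
# spark-submit instead. All values below are placeholders.
spark = (
    SparkSession.builder
    .master("k8s://https://my-cluster-api-server:6443")  # Kubernetes API server
    .config("spark.kubernetes.container.image", "registry.example.com/spark-py:2.4.5")
    .config("spark.kubernetes.namespace", "spark-jobs")
    .config("spark.executor.instances", "4")
    .appName("spark-on-k8s-example")
    .getOrCreate()
)

# In client mode the driver must be reachable from the executor pods, which is
# why it is typically run from inside the cluster (e.g. a Jupyter pod).
print(spark.range(1_000_000).count())
spark.stop()
```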

Show HN: React WebGL component: 45 3D formats viewer. Should I industrialize it? https://ift.tt/3ciL0XY

Show HN: React WebGL component: 45 3D formats viewer. Should I industrialize it? https://ift.tt/2WJQX9H May 11, 2020 at 02:48PM

Show HN: Color2k – smallest color manipulation lib, 3x smaller than tinycolor2 https://ift.tt/2Agi70p

Show HN: Color2k – smallest color manipulation lib, 3x smaller than tinycolor2 https://ift.tt/2Lhiz0y May 11, 2020 at 02:34PM

Show HN: Microservices Architecture and Step by Step Implementation on .NET https://ift.tt/3fH9fB9

Show HN: Microservices Architecture and Step by Step Implementation on .NET https://ift.tt/2Wic0l1 May 11, 2020 at 01:24PM

Show HN: Alma – open-source Active Learning data manager https://ift.tt/2WNehDA

Show HN: Alma – open-source Active Learning data manager https://ift.tt/3dClhd8 May 11, 2020 at 11:27AM

Show HN: Collaborative Realtime Task Lists https://ift.tt/2yP6tsW

Show HN: Collaborative Realtime Task Lists https://ift.tt/2zsaAeJ May 11, 2020 at 08:31AM

Show HN: Using 16 Web3.0 Auth Projects with Decentralized, MongoDB-Like Database https://ift.tt/2LvsgsJ

Show HN: Using 16 Web3.0 Auth Projects with Decentralized, MongoDB-Like Database https://ift.tt/35QWJuE May 11, 2020 at 02:18PM

Show HN: Visualize your HackerNews Activity https://ift.tt/2xWK24U

Show HN: Visualize your HackerNews Activity https://ift.tt/2WJILqa May 11, 2020 at 01:55PM

Show HN: Picture Cook – Cook with Pictures https://ift.tt/2zpe6q6

Show HN: Picture Cook – Cook with Pictures http://picturecook.com/ May 11, 2020 at 01:20PM