Michael Seibel

Porter - Heroku that runs in your own cloud

Porter is a platform that makes AWS/GCP as easy to use as Heroku. With instant deploys from Git, built-in autoscaling, and automatic SSL, Porter gives you the convenience of a PaaS while preserving flexibility and control.

Sabrina Hahn
Awesome product, congrats on the launch!
Justin Rhee
Thanks @hahnhouse 🙏
John Taylor Garner
Congrats on the launch! Amazing stuff.
Justin Rhee
Thanks @john_garner!
Christophe Dumont
looks promising! i'll definitely try it on my next project
Justin Rhee
@christophdumont Thanks, let us know what you think!
Scott Ames-Messinger
Looks great! Kinda specific question here: what types of built-in metrics are there? Also, if I want to see whether my users are getting 50x errors before requests reach my app server (e.g. at the AWS load balancer or NGINX ingress level), can I tell within Porter, or do I need to roll my own observability for that? With other K8s solutions out there, visibility into requests before they hit app servers has been lacking, which leaves me worried we'd have errors we're not getting metrics for.
Alexander Belanger
@scott_ames_messinger thanks! We use Prometheus as the built-in metrics solution, exposing some pretty basic defaults: the rate of network received bytes, memory usage in bytes, and CPU usage (for each metric, you can view it per-pod or summed across all pods in a deployment). Since we install the NGINX ingress controller by default, you can view these metrics for NGINX, but not at the AWS load balancer level. We're experimenting with different metrics views since they're pretty easy to implement with Prometheus -- for example, for the NGINX ingress controller, which exposes custom metrics, we could display things like upstream latency, number of connections, etc. Feel free to reach out with more specific questions!
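For reference, the per-pod and per-deployment defaults Alexander describes map onto the standard cAdvisor metrics that Prometheus scrapes from the kubelet. A rough sketch of such queries (the `pod` label pattern here is a hypothetical example, not Porter's actual dashboard queries):

```promql
# Rate of network received bytes, per pod
rate(container_network_receive_bytes_total{pod=~"my-app-.*"}[5m])

# Memory usage in bytes, summed across all pods in a deployment
sum(container_memory_usage_bytes{pod=~"my-app-.*"})

# CPU usage (in cores), summed across all pods in a deployment
sum(rate(container_cpu_usage_seconds_total{pod=~"my-app-.*"}[5m]))
```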
Scott Ames-Messinger
@abelanger Are you all using the AWS NLB or ALB? If the ALB, we can see those request metrics in the AWS console, which would be fine. If the NLB, having NGINX metrics would be vital -- otherwise we're flying blind, and NGINX could be throwing 502s while we're none the wiser. If you add those, please let me know! I'd love to migrate over. Also, I didn't see any documentation about this, but I'm assuming it's easy to roll back to previous versions (like in Heroku) and that environment management is done through the web UI?
Alexander Belanger
@scott_ames_messinger We support both the NLB and ALB based on the ingress annotation, but default to NLB pointing to the NGINX ingress controller. This setup seems to be the best user experience for custom domains due to the static IP address that gets assigned, so I think we're going to be primarily supporting NLB + NGINX in the short to medium term. Will definitely reach out when we have custom NGINX metrics! In terms of rollback, yes, you can roll back to any previous version of the application. For environment management, we don't have the equivalent of Heroku pipelines or preview environments (yet!), but we do support multiple clusters per project that you could use for staging/production.
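For context on the NLB-vs-ALB choice: on AWS, the load balancer type in front of an ingress controller is typically selected with a Service annotation, which is presumably the "ingress annotation" mentioned above. A minimal sketch of the default setup described (names and labels are illustrative, not Porter's actual manifests):

```yaml
# Sketch: provision an AWS NLB in front of the NGINX ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    # "nlb" requests a Network Load Balancer (static IP per AZ);
    # an ALB would instead be provisioned from an Ingress resource
    # via the AWS Load Balancer Controller.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
```

The static IPs the NLB assigns are what make custom-domain DNS simple, per the comment above.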
Scott Ames-Messinger
@abelanger NLB makes total sense. Love to hear when you get NGINX metrics!
Arjun Mahadevan
Congrats on the launch team 🚀
Justin Rhee
Thanks @arjmahadevan 🙏
Saam Venkat
Looks very easy to use and quite flexible. Great tool
Justin Rhee
@saam_venkat Thanks!
Abhinav Reddy
Looks great! Love that it is open-source to boot.
Tosh
Awesome tool! Great founders!
Justin Rhee
@toshvelaga thanks, we appreciate it!
vivek dwivedi
This is going to be as big as Postman someday. I can bet anything on it.
Angeline Tan
Easy to use, amazing UI and open source - phenomenal factors! Good job guys!
Justin Rhee
Thanks @angelinetan, really appreciate it!