
I want to run my Node.js server on my Linux server (shared hosting) basically indefinitely.

Originally, I set it up using forever, which worked for a while. But I noticed that after around two weeks, both the forever daemon and the Node.js server process would vanish without a trace. No logs whatsoever.

Then I switched to what I'm currently using: PM2 as a daemon utility. All things were going fine... except they weren't. PM2 was also shutting down after some time had passed. Infuriated by this, I set up a cron job that restarts PM2 along with my Node.js server; all of its output is sent to my inbox. A sketch of the job follows below.
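
It was roughly along these lines (the schedule shown here is illustrative, not my exact crontab):

```
# Illustrative crontab entry: every 10 minutes, bring PM2 back up from the
# saved process list, or start the app fresh if there is nothing to resurrect.
# Cron mails any stdout/stderr to the account owner by default.
*/10 * * * * pm2 resurrect || pm2 start /home/jwroczyn/krzemien/krzemien-api/server.js
```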

Up until this Monday (15/07/2019), everything was fine. However, on Monday, 16 minutes after midnight, something broke. Again. This is the output of the cron job attempting to restart the server:

[PM2] Spawning PM2 daemon with pm2_home=/home/jwroczyn/.pm2
[PM2] PM2 Successfully daemonized
[PM2][ERROR] Interpreter node does not seem to be available
[PM2] Starting /home/jwroczyn/krzemien/krzemien-api/server.js in fork_mode (1 instance)
[PM2] Done.
┌──────────┬────┬─────────┬──────┬─────┬─────────┬─────────┬────────┬─────┬────────┬──────────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status  │ restart │ uptime │ cpu │ mem    │ user     │ watching │
├──────────┼────┼─────────┼──────┼─────┼─────────┼─────────┼────────┼─────┼────────┼──────────┼──────────┤
│ server   │ 0  │ 1.0.6   │ fork │ N/A │ errored │ 0       │ 0      │ 0%  │ 0 B    │ jwroczyn │ disabled │
└──────────┴────┴─────────┴──────┴─────┴─────────┴─────────┴────────┴─────┴────────┴──────────┴──────────┘
 Use `pm2 show <id|name>` to get more details about an app

Status: errored.

The status switched to online only because I killed all PM2 processes and restarted the daemon again.

Unfortunately, I didn't save the output of the `pm2 show` command (it contained basically no useful information anyway). However, when I ran `pm2 monit`, the memory usage reported for the server was something like %f25, which is not exactly number-like. I'm not sure what to make of that.

I do not know what to do anymore.

I just want to have my Node.js server running normally.

Jerry Sky

1 Answer


The red flag for me upon reading your post is shared hosting. The symptoms you are experiencing are typical signs of shared hosting limitations:

  • Processes killed after some time
  • No logs of failed processes (probably because they were killed with SIGKILL)
  • Custom binaries (in your case, `node`) disappearing or becoming inaccessible

Process control daemons like PM2, forever, and Supervisor are good at keeping their child processes alive by restarting them, but there's nothing protecting them if they themselves are killed.
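
You can see the gap for yourself; a minimal sketch (the app name and the PM2 daemon's process title here are assumptions and vary by version):

```
# PM2 keeps its child app alive through crashes...
pm2 start server.js --name server
pm2 save                     # snapshot the process list for `pm2 resurrect`

# ...but if the host kills the daemon and the app wholesale, nothing is left
# to restart anything, and SIGKILL gives no chance to write logs:
pkill -9 -f "God Daemon"     # PM2 daemon's process title (version-dependent)
pkill -9 -f "server.js"      # the app itself
pm2 list                     # spawns a fresh daemon with an empty process list
```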

In shared hosting, user processes are expected to consume resources while a visitor is accessing the user's website and then release them once the page has been generated. PHP under Apache normally works this way, but a Node.js application is its own server that reserves its resources indefinitely.

Shared hosting providers like to kill long-running processes because those processes tie up memory and CPU time, which works against their interest in overselling. They can easily do this; see the RLIMIT_CPU resource limit in setrlimit(2):

RLIMIT_CPU

CPU time limit in seconds. […] If the process continues to consume CPU time, it will be sent SIGXCPU once per second until the hard limit is reached, at which time it is sent SIGKILL.
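
You can reproduce this behaviour with the shell's `ulimit` builtin, which wraps setrlimit(2); a minimal sketch:

```
# Cap CPU time at 2 seconds for this subshell and its children
# (ulimit -t sets RLIMIT_CPU), then burn CPU until the kernel steps in.
(
  ulimit -t 2
  while :; do :; done   # busy loop; terminated per RLIMIT_CPU
)
echo "busy loop exited with status $?"   # 128 + signal number, e.g. 152 for SIGXCPU
```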

You found a workaround by restarting the process with cron, which resets the process limit accounting. This seems to have worked for you until your account hit some other limit or a server administrator disabled your Node interpreter. Perhaps your hosting provider noticed that the processes they were trying to kill were respawning, so they may have just prevented `node` from executing, either by deleting it (`rm node`) or by turning off the execute bit (`chmod -x node`).
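
If you still have shell access, a few quick checks distinguish those cases; a sketch:

```
# Is node still on the PATH at all?
if command -v node; then
  ls -l "$(command -v node)"   # execute bit stripped? look for a missing 'x'
  node --version               # "Permission denied" would point to chmod -x
else
  echo "node not found: deleted or removed from PATH"
fi
```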


You can either keep fighting the limits on your shared hosting account, or you could switch to a different type of hosting that dedicates resources to you:

  • Dedicated servers, as their name suggests, may be your best bet because the resources are dedicated to you. By default, there would not be a limit policy or an administrator killing your processes.
  • Some virtual private server providers may offer some dedicated resources as well, but they are also prone to overselling.
  • A more modern approach that bills based on usage rather than a fixed rate is serverless computing. This billing model gives resources to your application based on your need and is "dedicated" in that sense, though your application is executed on a server or servers shared with other isolated applications.

If you choose to keep creating workarounds, they'll just get more complex, and your uptime will continue to suffer. Your shared hosting provider may even suspend you for trying to bypass their limits.

Deltik
  • Thank you. That is a very thorough and clear explanation. I was considering switching to a VPS anyway. But you're saying that the problem I encountered may also happen on a VPS; why is that? Might they still be able to kill or throttle some of the processes running on my private, separate VPS? – Jerry Sky Jul 17 '19 at 12:56
  • @JerrySky: VPS providers can also oversell and impose limits. This is more common with those using [containers](https://en.wikipedia.org/wiki/OS-level_virtualisation) (OpenVZ, Virtuozzo, LXC, etc.) as the virtualization technology because they have granular control over VPS processes. This is less common with those using [hypervisors](https://en.wikipedia.org/wiki/Hypervisor) (KVM, Xen, Hyper-V, etc.), but it's still possible to throttle the VPS's resources as a whole. (related: [virt type](https://superuser.com/a/1326891/83694), [VZFS disk limits](https://superuser.com/a/959221/83694)) – Deltik Jul 17 '19 at 14:04