The internet can be a swamp of hackers, crackers, and hucksters attacking your systems for fun, profit, and fraud. Defending your data and applications against this onslaught is a cold war, requiring constant escalation of defensive techniques against an ever-increasing offense.
Clinicians are mobile people. They work in ambulatory offices, hospitals, skilled nursing facilities, on the road, and at home. They have desktops, laptops, tablets, iPhones and iPads. Ideally their applications should run everywhere on everything. That's the reason we've embraced the web for all our built and bought applications. Protecting these web applications from the evils of the internet is a challenge.
Five years ago all of our externally facing web sites were housed within the data center and made available via network address translation (NAT) through an opening in the firewall. We performed periodic penetration testing of our sites. Two years ago, we installed a Web Application Firewall (WAF) and proxy system. We are now in the process of migrating all of our web applications from NAT/firewall accessibility to WAF/Proxy accessibility.
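To illustrate the architectural difference, here is a toy sketch of the reverse-proxy idea: outside traffic terminates at the proxy tier, which forwards requests to the internal application server, instead of that server being exposed directly through a NAT'd opening in the firewall. The hostnames and ports are placeholders, and this is only a sketch; a real WAF/proxy tier adds TLS termination, request inspection, and centralized logging.

```python
# Toy reverse proxy: external clients talk to this process, never to the
# internal application server. Hostnames and ports are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

BACKEND = "http://app.internal.example:8080"  # hypothetical internal app server


class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the internal application and relay its reply.
        with urlopen(Request(BACKEND + self.path)) as upstream:
            body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Type",
                         upstream.headers.get("Content-Type", "text/html"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReverseProxy).serve_forever()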
We have a few hundred externally facing web sites. From a security standpoint there are only two types: those that provide access to protected health information and those that do not. Fortunately, more fall into the latter category than the former.
One of the major motivations for creating a multi-layered defense was the realization that many vendor products are vulnerable and even when problems are identified, vendors can be slow to correct defects. We need "zero day protection" to secure purchased applications against evolving threats.
Technologies to consider in a multi-layered defense include:
1. Filter out basic network probes, such as traffic on unused ports, at the border router.
2. Use Intrusion Prevention Systems (IPS) to block common attacks such as SQL injection and cross-site scripting; we block over 10,000 such attacks per day. You could implement multiple IPSs from different vendors to create a suite of features, including URL filtering, which prevents internal users from accessing known malware sites. A simplified sketch of this kind of signature matching follows the list.
3. Deploy a classic firewall and Demilitarized Zone (DMZ) to limit the "attack surface".
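As a rough illustration of what "blocking common attacks" means at the request level, here is a minimal, purely illustrative signature check. The patterns below are toy examples I made up for this sketch; production IPS and WAF products rely on far larger, professionally maintained rule sets.

```python
import re

# Hypothetical, greatly simplified attack signatures; real rule sets
# (commercial IPS feeds, OWASP Core Rule Set, etc.) are far more extensive.
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),     # classic SQL injection probe
    re.compile(r"(?i)('|%27)\s*or\s*'?1'?='?1"),  # ' OR 1=1 style tautology
    re.compile(r"(?i)<script\b"),                 # reflected cross-site scripting
    re.compile(r"(?i)javascript:"),               # script-scheme injection
]


def is_malicious(query_string: str) -> bool:
    """Return True if the request's query string matches a known attack pattern."""
    return any(sig.search(query_string) for sig in SIGNATURES)


# Example: a request the filter would block versus one it would pass.
print(is_malicious("id=1' OR '1'='1"))          # True  -> drop and log
print(is_malicious("patient=12345&view=labs"))  # False -> forward to the application
```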
Policies and procedures are an important aspect of maintaining a secure environment. When a request is made to host a new application, we start with a Nessus vulnerability scan.
Applications must pass the scan before we will consider hosting them. We built a simple online form for these hosting requests, both to track them and to keep the data in a SQL database. That database provides the data source for an automated re-scan of each system.
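As an example of how a request-form database can drive re-scans, here is a minimal sketch. The database file, table, column names, and 90-day interval are hypothetical, and the actual launch of the scan is left as a placeholder since the details depend on the scanner in use.

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical schema: the online request form writes one row per hosted site.
conn = sqlite3.connect("hosting_requests.db")
cutoff = (datetime.now() - timedelta(days=90)).isoformat()

due = conn.execute(
    "SELECT hostname FROM hosted_sites "
    "WHERE last_scanned < ? OR last_scanned IS NULL",
    (cutoff,),
).fetchall()

for (hostname,) in due:
    # launch_scan() is a placeholder for however the scan is actually started
    # (scanner API, command line, or a ticket to the security team).
    print(f"queueing re-scan for {hostname}")
    conn.execute(
        "UPDATE hosted_sites SET last_scanned = ? WHERE hostname = ?",
        (datetime.now().isoformat(), hostname),
    )

conn.commit()
```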
Penetration testing of internally written applications is somewhat more valuable than testing purchased products, because we can update and correct our own code based on the findings.
One caveat: the quality of penetration testing is highly variable. When we hire firms to attack our applications, we often get a report filled with theoretical risks that are not especially helpful, e.g. "if your web server were accidentally configured to accept HTTP connections instead of forcing HTTPS connections, the application would be vulnerable." That's true, and if a meteor struck our data center, we would also have many challenges on our hands. When choosing a penetration testing vendor, aim for one that can put its findings in a real-world context.
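That particular finding is, at least, easy to check for yourself. Here is a small sketch that tests whether a site answers plain HTTP without redirecting to HTTPS; the function name and hostname are my own placeholders, and a real assessment obviously goes far beyond this single check.

```python
import urllib.error
import urllib.request


class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Surface redirects as HTTPError instead of silently following them.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None


def forces_https(hostname: str) -> bool:
    """Return True if plain HTTP is redirected to HTTPS (or refused outright)."""
    opener = urllib.request.build_opener(_NoRedirect)
    try:
        opener.open(f"http://{hostname}/", timeout=10)
        return False  # content served over plain HTTP: the finding is real
    except urllib.error.HTTPError as err:
        location = err.headers.get("Location", "")
        return err.code in (301, 302, 307, 308) and location.startswith("https://")
    except OSError:
        return True  # nothing listening on port 80 also forces HTTPS


print(forces_https("www.example.org"))  # placeholder hostname
```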
Thus, our mitigation strategy is to apply deep, wire-based security; to utilize many tools, including IPS, traditional firewalls, WAF, and proxy servers; and to perform recurring internal scans of all systems that are accessible from outside our network.
Of course, all of this takes a team of trained professionals.
I hope this is helpful for your own security planning.