Network revamp: Linux with Windows

Monday Jun 28th 1999 by Sean Sosik-Hamor

Overall, integrating Linux is fairly simple and, with proper preparation, can be done over a weekend for a LAN of 20 or 30 end user systems. This tutorial covers the process.




More and more corporations and small businesses are taking the Linux plunge these days, and for good reason. The low cost of Linux combined with an abundance of geeks who eat, sleep, and breathe UNIX has created a win-win situation for IT managers and geeks alike. Since it's usually a younger junior systems administrator or "that kid from the design group who knows a lot about computers" who introduces Linux into the workplace, there's already someone in-house to move into a full-time systems administration position with no training required.

In other cases, Linux is stumbled across by accident, as illustrated by one of my previous contracts. About two years ago, I was brought into a large manufacturing outfit to improve their network. They were having problems across the board, from poor network performance to an NT file server grinding to a halt because they had lost their systems administrator and never bothered to hire a new one.

When I was brought in to take a look at their network and to do an initial inventory, I found an old Red Hat Linux 4.2 disc kicking around in their supply cabinet. It turned out that an intern had introduced them to Linux but wasn't skilled enough to do a full installation. Since they needed a zero-maintenance solution and part of my contract was to get their entire LAN up on the Internet with full e-mail and remote access capability, I decided that Red Hat Linux 5.0 (remember, this was 1997) would make the best replacement for their dying NT server.


Inventory and preparation

The first step in the network overhaul was to get a full inventory of everything touching the network. It's always a good idea to have a list of every system in the building for asset control and to find out the exact environment that end users work in on a day-to-day basis. Here is a basic inventory checklist to use when collecting data (I've used one of the systems from this upgrade as an example):
  • System Name: John's Beast (Windows 95 Revision A)
  • Description: Computer in John's office
  • Workgroup: HEATSHRINK
  • IP Address: N/A
  • Protocols: NetBIOS, IPX, NetBEUI
  • CPU/RAM/HDD: Intel 486dx2/66, 16MB RAM, 420MB HDD
  • Make/Model: Generic Scratchbuilt PC
  • Serial Number: N/A
  • Asset Tag: XXXX-XXXX
  • Owner Name: John Dough
  • Username: doughj
  • Password: lamepass
  • Applications: MS Office 97 (Word, Excel and PowerPoint), Internet Explorer, Microsoft Outlook, SMS Database Client.
  • Documents: C:\JOHNSTUFF\
Since most of the Windows 95 systems had been neglected and were in pretty poor shape, I decided that it would be best to do a full Windows 95 reinstall after the Linux server came online. That way, I had full control over what got installed, a consistent naming and IP addressing scheme could be instituted, and all the nasty little problems that had been cropping up on each system would be wiped out. The last thing you want is for management to blame the new Linux server for Windows 95 crashing.

Also, since a strict backup policy had been set in place, I had each end user move all documents from their C: drive to their NT network drive. (Getting end users to actually do this was like pulling teeth, but they tended to comply once I told them they'd lose three years' worth of work when I formatted their hard drives if they didn't move their files to the server.) After all users moved their files to the NT server, I started the server inventory.

There were two NT production servers online, both virtually identical P133 systems with 64MB RAM and 4GB SCSI drives running Windows NT 4.0 Server. The NT server I wouldn't be touching for this upgrade was the domain controller and the inventory, tracking, accounting, and manufacturing system. Although it needed major work, it wasn't the focus of this job. The second system was just a file server running as a secondary domain controller. The file and directory structure was extremely simple: each user had a home directory, and there were two shared group directories, an accounting/human relations area and a manufacturing/design area.

Unfortunately, there was no solid username policy in place, so the upgrade wouldn't be as transparent as I had hoped. All users were assigned new usernames of eight characters or fewer, based on their last name and first initial. A new password policy was also put in place requiring alphanumerics and at least one punctuation mark. Since the network was no longer an island and would be connected to the Internet, this was the first security policy set in place.

Username migration took place around 3:00am after the last set of backups fired off. I simply changed the usernames on the domain controller and then pushed the changes to the secondary domain controller. The whole process only took about 30 minutes, including walking from PC to PC changing usernames so the end users wouldn't even have to type in their new name. The next day, each user was assigned a new password and their new username and password was noted on the inventory sheet.


Installing Linux

Since I couldn't risk taking the NT server offline, I decided to do a redundant install. One of the spare 486dx2/66 systems with 16MB RAM and a 540MB HDD was initially used as the testbed Linux server, but after seeing the performance increase over the P133 running NT, it was decided to keep the 486 in place as the primary proxy, e-mail, Web, and file server.

I decided that using the 540MB HDD as a boot disk and then adding a 7.2GB HDD for /home and /var would be the best configuration. Red Hat Linux 5.0 was chosen because of my experience with it and its flawless performance as a high-load Web server. Were I doing an equivalent installation today, I would choose OpenBSD 2.5 or Red Hat Linux 6.0. I used the following partition table to squeeze the most space out of the drives:



/dev/hda1   35MB  /
/dev/hda2   75MB  swap
/dev/hda3  350MB  /usr
/dev/hda4   80MB  /tmp
/dev/hdb1  500MB  /var
/dev/hdb2  6.7GB  /home

Although disk requirements for Red Hat Linux 6.0 and various other distributions have started to skyrocket for a full installation, the above partition table works great for a small server with minimal services and packages installed. The minimal approach was chosen primarily because of security and the lack of funds for new hardware. Besides, the simpler the system, the more secure it is and the easier it is to maintain.
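The matching /etc/fstab for this layout would look something like the following sketch (ext2 was Red Hat 5.0's standard filesystem; the mount options shown are typical defaults, not taken from the actual server):

```
# /etc/fstab sketch matching the partition table above
/dev/hda1   /       ext2   defaults   1 1
/dev/hda2   swap    swap   defaults   0 0
/dev/hda3   /usr    ext2   defaults   1 2
/dev/hda4   /tmp    ext2   defaults   1 2
/dev/hdb1   /var    ext2   defaults   1 2
/dev/hdb2   /home   ext2   defaults   1 2
```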

Custom package installation was used and only the Base, Network, DNS, Email, FTP, Samba, and Web Server packages were installed. Because the server would rarely be used at the command line, niceties such as Emacs, IRC, and Netscape weren't installed and only the bare minimum tools would be used. The only other non-standard tools that were installed were *hobbit*'s netcat, qmail, sniffit, trafshow, and ssh.

Once the system was up and running, all non-essential services were disabled. Everything in /etc/inetd.conf was turned off except for FTP, IMAP, POP3, and SMTP (for qmail). All daemons were disabled except for httpd and sshd and the passwd file was then shadowed using pwconv5 (Red Hat Linux 6.0 automatically shadows now). As part of the new security policy, disabling everything that wasn't used ensured the system was as secure as possible.
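For illustration, the trimmed-down /etc/inetd.conf would look roughly like this (a sketch only; exact daemon paths varied by release, and the qmail line assumes qmail-smtpd run through tcpd and tcp-env):

```
# /etc/inetd.conf (sketch): only FTP, POP3, IMAP, and SMTP enabled;
# everything else commented out. Paths are typical for Red Hat 5.x.
ftp     stream  tcp  nowait  root    /usr/sbin/tcpd  in.ftpd -l -a
pop-3   stream  tcp  nowait  root    /usr/sbin/tcpd  ipop3d
imap    stream  tcp  nowait  root    /usr/sbin/tcpd  imapd
smtp    stream  tcp  nowait  qmaild  /usr/sbin/tcpd  /var/qmail/bin/tcp-env /var/qmail/bin/qmail-smtpd
#telnet stream  tcp  nowait  root    /usr/sbin/tcpd  in.telnetd
#shell  stream  tcp  nowait  root    /usr/sbin/tcpd  in.rshd
```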



After the system was locked down and hardened, it was brought up on the network. An IPRoute router had already been installed, so Internet connectivity was available via a 56Kbps dial-on-demand link. All current updates were downloaded from a Red Hat updates mirror and installed to bring the system up to current specification. Copies of all installed updates and non-standard packages were kept in /home/ftp/public/updates so they'd be available if any other Linux systems were ever brought online.


Migrating data

After the Linux server was ready to be deployed, it was decided that data migration should take place over a weekend so the transition would be as transparent as possible. Samba was configured to pass authentication through to the NT domain controller rather than keep its own password database. This method was chosen for both simplicity and security: since the username and password were the same for both the NT server and the Samba file server, passwords were cached and the user was automatically logged in, and it reduced the chances of /etc/passwd being compromised. As for Samba configuration, each directive in smb.conf is extensively documented, more so than the documentation that comes with most software.
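A minimal smb.conf sketch for this kind of pass-through setup follows; the NT server's NetBIOS name (base) and the workgroup name are taken from elsewhere in this article, and the rest is an assumed-typical configuration rather than the actual file:

```
# smb.conf (sketch): hand authentication off to the NT server
[global]
   workgroup = HEATSHRINK
   security = server
   password server = base
   encrypt passwords = yes

[homes]
   comment = Home Directories
   browseable = no
   writable = yes
```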

To make sure all data was moved from the NT server to the Linux server reliably, I double-checked permissions on the NT server to make sure that users had read privileges on all files in their home directories. Once it was guaranteed that each user had the proper permissions, I ran smbmount as root and mounted each user's home directory from \\ntserver\username to /mnt/username. I then logged in as each user individually and ran script backuplog.txt to start a keystroke and output logging session. After that, it was a simple matter of running cp -avR /mnt/username/* ~ to copy all files into their home directories.
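Per user, the copy step amounted to the following sketch; here temporary directories stand in for the smbmount'd NT share (/mnt/username) and the Linux home directory so the commands can be run anywhere:

```shell
# Stand-ins: on the real server SRC was /mnt/$user (the smbmount'd
# NT home directory) and DEST was /home/$user.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
echo "quarterly numbers" > "$SRC/report.doc"

# The migration step itself: archive-mode, verbose, recursive copy,
# with the output captured much as the script(1) session did.
cp -avR "$SRC/." "$DEST/" > "$DEST/backuplog.txt" 2>&1
```

Checking the captured log afterward is what catches permission errors before the NT server is retired.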

I then double-checked backuplog.txt for each user to make sure there were no errors. Once all users were finished copying, I created manufact and humanrel accounts for the manufacturing and human relations shared directories. I followed the same process to copy over those shared directories, then set group read/write permissions and revoked all world permissions. After double-checking the group lists on the NT server, the appropriate user accounts were added to the manufact and humanrel groups so each user would have access to their shared files.
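The permission step for the shared areas can be sketched like this; a temporary directory stands in for /home/manufact, and the chgrp line is shown commented out because it assumes the manufact group already exists on the system:

```shell
# Stand-in for /home/manufact (the manufact account's home directory).
SHARE=$(mktemp -d)
touch "$SHARE/assembly-specs.doc"

# chgrp -R manufact "$SHARE"   # group must exist first (groupadd manufact)
chmod -R 770 "$SHARE"          # group read/write, all world access revoked
```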

To test the installation, I used one of the Windows 95 systems to log into the network with every single username to make sure that each account worked. Although it's unnecessary to check every account once three or four random ones have been verified, I took this extra step because it's safer to spend more time verifying than to somehow miss something. Once the accounts were verified to be working and the data intact, a full backup of the NT server was made and the system was shut down and unplugged so it couldn't accidentally be used.


Internet configuration

The entire LAN was connected to the Internet via a standalone IPRoute router, and the Linux server acted as a DNS and proxy server using Apache's built-in proxy facility. A proxy was chosen to cache often-accessed Web pages because the entire company would be sitting on a 56Kbps dialup, and 30 people hitting their homepages at 9:00 AM sharp tends to slow down a network. An alternate solution to Apache proxying would be an optimized Squid2 proxy, and RPMs are now available for Red Hat Linux 6.0.
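The Apache proxy setup comes down to a handful of httpd.conf directives; a sketch, assuming mod_proxy is compiled in and using example cache sizes rather than the actual values:

```
# httpd.conf (sketch): Apache 1.x mod_proxy as a caching proxy
ProxyRequests On
CacheRoot /var/cache/httpd
CacheSize 51200          # cache size in KB (example value)
CacheMaxExpire 24        # hours before a cached page expires (example)
```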

Since none of the systems on the network had ever used TCP/IP before, it was necessary to come up with an IP addressing scheme and assign the NT server an IP address. For simplicity, and to make it possible to monitor the network using Shomiti Surveyor and NAI SnifferPro, the entire network would be converted from NetBEUI to TCP/IP. Because the IPRoute router would act as a Network Address Translation (NAT) firewall and the LAN wouldn't have direct access to the Internet, the reserved 192.168.0.0 block for private, unconnected networks was used.

Since the IPRoute router used NAT, all outbound packets were transparently proxied out to the Internet and all inbound packets were filtered. Only SMTP, HTTP, and SSH connections were accepted and forwarded to the Linux server. Regardless of how the network is connected to the Internet, it's a good idea to come up with a definite IP addressing scheme and use Visio (or equivalent) to put together a network diagram:

  • Network: 192.168.0.0
  • Subnet Mask: 255.255.255.0
  • Broadcast: 192.168.0.255
  • Gateway: 192.168.0.254
  • Nameserver: 192.168.0.1
  • core (Linux): 192.168.0.1
  • base (NT): 192.168.0.2
Once the Internet connection was configured and IP addresses were assigned, the systems needed to know how to get to each other. A caching nameserver (named, included with all versions of Linux) was set up to handle internal DNS and to make requests to the outside world. Since an unconnected block was used for IP addresses, a bogus domain name was used to make sure that no names would conflict with real systems out on the Internet. The Linux server became core.companyname.lab inside the firewall and core.companyname.com outside the firewall. An alternate but less secure method would be to use companyname.com for all systems and put DNS information for internal systems on the external name server (never give out more information about your internal LAN than is needed!).

Both forward and reverse DNS zonefiles were created, then added to the named.boot config file. The ISP's name servers were then added to /etc/resolv.conf as secondary and tertiary name servers for faster lookups on external domain names.
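As a sketch, the name server and resolver configuration would look something like this. The BIND 4-style named.boot syntax is shown since that's the config file named above; the zonefile names are examples, and 10.0.0.1/10.0.0.2 are stand-ins for the ISP's name servers:

```
; /etc/named.boot (sketch)
directory   /var/named
cache       .                         named.ca
primary     companyname.lab           companyname.lab.zone
primary     0.168.192.in-addr.arpa    192.168.0.rev

# /etc/resolv.conf (sketch)
search companyname.lab
nameserver 192.168.0.1
nameserver 10.0.0.1
nameserver 10.0.0.2
```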

The last step was setting up qmail to replace sendmail. Because sendmail has a reputation for miscellaneous bugs and security holes popping up a few times a year, I needed the most secure mail daemon possible. Once the Linux server was set up and my contract ended, chances were the server would never see a systems administrator again, so security was a primary concern. The version of qmail that I installed was the full archive downloaded from www.qmail.org, but RPM files can now be installed for full compatibility with Red Hat Linux 6.0.

Because qmail delivers to /home/username/Mailbox instead of /var/mail/username and no simple patch was available for the IMAP and POP3 daemons of the day, symbolic links were created so /var/mail/username pointed at /home/username/Mailbox. Although I initially thought that file locking would be a problem, the symbolic link method worked out fine because no one ever interactively read e-mail using pine or mail. Even though the /var/mail entries were symbolic links, qmail still honored the file locking, so if someone was downloading e-mail via IMAP or POP3, qmail would queue the message in /var/qmail/queue until the lock cleared. Full patches for IMAP are now available from qmail.org, so the symbolic links are no longer necessary.
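The symbolic links can be created with a short loop. In this sketch, temporary directories stand in for /home and /var/mail so it can run anywhere, and the usernames are examples:

```shell
# Stand-ins for /home and /var/mail on the real server.
HOMEBASE=$(mktemp -d)
MAILBASE=$(mktemp -d)

for user in doughj smithb; do              # example usernames
    mkdir -p "$HOMEBASE/$user"
    touch "$HOMEBASE/$user/Mailbox"        # qmail's delivery target
    ln -s "$HOMEBASE/$user/Mailbox" "$MAILBASE/$user"
done
```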

Inbound e-mail acceptance and outbound relaying then needed to be configured, which was extremely simple. Without relay authentication, it would be possible for external users to push e-mail through the open relay and send spam or fakemail. Both companyname.lab and companyname.com were added to /var/qmail/control/rcpthosts and /var/qmail/control/locals so inbound e-mail would be recognized, then outbound relaying was locked down using tcp_wrappers.
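The relay rule boils down to setting RELAYCLIENT for LAN addresses in /etc/hosts.allow; a sketch, assuming qmail-smtpd is started from inetd through tcpd and tcp-env:

```
# /etc/hosts.allow (sketch): LAN hosts may relay (RELAYCLIENT set),
# everyone else is limited to the domains listed in rcpthosts.
tcp-env : 127.0.0.1 192.168.0. : setenv RELAYCLIENT ""
tcp-env : ALL
```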


Reinstalling Windows 95

Now that all of the services had been set up, it was time to do a clean install of Windows 95 on all of the end users' systems. We chose to use OEM copies of Windows 95 OSR2 to keep the systems as stable as possible. A minimal operating system installation with Office 97 and the SMS database client was performed, and copies of the Windows 95 CAB files and all installation programs were made available on the Samba server so the Windows 95 disc didn't need to be left out in the open for users to install drivers. Internet Explorer 4 with Outlook Express was then installed for Web surfing and IMAP e-mail.

IP addresses were then assigned, and manageable names were given to each system. For example, 192.168.0.22 in the accounting department would be named Acct-22 (acct-22.companyname.lab) with the description "James' PC - Room 12." Local printers were then shared so end users could send printouts to each other without going through e-mail first. The added bonus was that all systems then showed up in Network Neighborhood as a quick reference showing who owned which PC, their room number, their IP address, and whether they were offline. An HP4000N network laser printer was then configured and queued through the Linux server using lpd and Samba printer sharing.
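The server-side print queue amounts to an lpd printcap entry plus a printable Samba share; a sketch, where the queue name, spool paths, and the printer's IP address (192.168.0.10) are all examples rather than values from the actual network:

```
# /etc/printcap (sketch): queue for the HP4000N via its network port
lp|hp4000n:\
        :sd=/var/spool/lpd/lp:\
        :rm=192.168.0.10:rp=raw:\
        :sh:mx#0:

# smb.conf addition (sketch): expose the lpd queue to Windows clients
[printers]
   path = /var/spool/samba
   printable = yes
   guest ok = no
```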

Overall, integrating Linux is fairly simple and, with proper preparation, can be done over a weekend for a LAN of 20 or 30 end user systems. It's been two years since the Linux server was installed and it's still working just fine today without a systems administrator, using Red Hat's built-in crontabs to prune logfiles and keep everything clean. The record uptime so far is 270 days. Not bad for a salvaged 486 that was about to be thrown out.


Related resources

1. Samba: Integrating UNIX and Windows, by John D. Blair.
2. ZedZ Consultants: hub for secure Linux projects and related info.
3. qmail: preferred to sendmail because of higher security.
4. Red Hat mirror sites index: where to download the latest updates and releases.
5. Squid mirror sites: where to download this caching proxy (source code only).


Sean Sosik-Hamor is an Alpha Geek and systems administrator for Lucent Technologies and, in his free time, runs Sosik-Hamor Networks off a T1 out of his basement.

