Linux by Trial and Error

A repository of the things I learn about Linux

SSH RSA Key Errors

Recently, I received an error when trying to connect to a new server via SSH. It was a newly built server, built using Red Hat's Kickstart. I had already built other servers using the same Kickstart profile, so I figured the script was not likely the problem.

Before I get into that, let’s first go over the error I was getting when I tried to connect via SSH:

buffer_get_ret: trying to get more bytes 257 than in buffer 235
buffer_get_string_ret: buffer_get failed
buffer_get_bignum2_ret: invalid bignum
key_from_blob: can't read rsa key
key_read: key_from_blob AAAAB3NzaC1yc2EAAZABIwAAAQEAwnCNsm+WKwBR8hSAInR4t3WgVGuvVY6xGz7Udo0jLRL/vpJbq1Kb0QupZ3qK8dnDbPbjCpC9w523MbraXXToyTP6riMXD19H1QfaeROY1fTv8ev7ZvNnfaHoN/Ifz3uPsKtRPmRKsxgF0/+2wmei2WLGDiHzOi7tiUXhSnLrgd7dldUtahOlw3tbp+GBVlTRenDbokXwi8Ru5oWqkY6jyBRVhDMO8AgowukNj/CoXQY59w6SI+ngEFxpCnSO78LuIRWceSSAsBXunr+843VbgBdgnIYaT0sMICQy/ieGiBoqT3pe166mWC failed
buffer_get_ret: trying to get more bytes 257 than in buffer 235
buffer_get_string_ret: buffer_get failed
buffer_get_bignum2_ret: invalid bignum
key_from_blob: can't read rsa key
key_read: key_from_blob AAAAB3NzaC1yc2EAAZABIwAAAQEAwnCNsm+WKwBR8hSAInR4t3WgVGuvVY6xGz7Udo0jLRL/vpJbq1Kb0QupZ3qK8dnDbPbjCpC9w523MbraXXToyTP6riMXD19H1QfaeROY1fTv8ev7ZvNnfaHoN/Ifz3uPsKtRPmRKsxgF0/+2wmei2WLGDiHzOi7tiUXhSnLrgd7dldUtahOlw3tbp+GBVlTRenDbokXwi8Ru5oWqkY6jyBRVhDMO8AgowukNj/CoXQY59w6SI+ngEFxpCnSO78LuIRWceSSAsBXunr+843VbgBdgnIYaT0sMICQy/ieGiBoqT3pe166mWC failed
The authenticity of host 'rhnsat1 (10.0.175.12)' can't be established.
RSA key fingerprint is 9d:5d:78:45:fb:6e:a5:2e:5b:58:83:ac:2b:af:b9:24.
Are you sure you want to continue connecting (yes/no)? no
Host key verification failed.

Now, to be clear, this did not prevent me from logging into the server; if I answered yes, it logged me in, just with a warning that the authenticity of the server could not be verified. So, what's an admin to do? Search Google, of course.

Unfortunately, that search was fruitless: every hit related to this kind of error referenced the use of RSA key authentication, and I was not using key authentication. I was simply using a username/password which authenticates against LDAP. Imagine my chagrin.

As a quick test, I logged in as root on my system and attempted an SSH connection to the remote server. This time, I did not get the error. I logged off and back in and still didn't get the error. Next, I logged out as root, logged back in as myself, and looked in my ~/.ssh/known_hosts file. There was my new server at the end of the file. I removed it and went to save the file… and got an error that it couldn't write. Interesting.

I logged back in as root, edited the file, and it let me remove the entry for that new server and save the known_hosts file just fine. This led me to verify the permissions on the known_hosts file. They were correct: I owned it and had read/write access to it. So, why did it not allow me to save it?
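
As an aside, if you'd rather not hand-edit known_hosts at all, ssh-keygen can remove a host's entry for you (this assumes the stock OpenSSH client tools; substitute your own hostname):

# ssh-keygen -R rhnsat1

It saves a backup copy as known_hosts.old, which is handy if you yank the wrong entry.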

I ran ‘df -h’ on a hunch and what, to my wondering eyes should appear? A line showing that my /home partition was at 100% capacity! Well, darn it all if that didn’t explain a thing or two.
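
If you want to reproduce the hunt, something like this (the paths are just examples) will show how full the filesystem is and then finger the biggest offenders on it:

# df -h /home
# du -sk /home/* | sort -n | tail -5

I'm using 'du -sk | sort -n' instead of sort's human-readable -h flag because the older coreutils on RHEL 5-era systems doesn't have it.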

As it turns out, I had recently attempted to download a particularly large file (the RHEL 6.4 DVD .iso), and the download had failed because there wasn't enough space left for it. But I had forgotten that I never went back and deleted the partial file.

Once that file was deleted, I was able to edit my known_hosts file, SSH to my new server, log back out, re-SSH to my new server and my errors were a thing of the past.

*whew*

Not sure why, but it always seems to come down to the little things. So, hopefully, there is someone out there who is getting this error and is NOT using RSA key authentication and will find this helpful. If not, it can at least serve as a reminder to me to check this in the future.

July 30, 2013 | authentication, errors, ssh

LDAP Directory Server on CentOS 6.3 Using TLS

This is essentially a continuation of my last post, because I needed to set up a CA to sign certs in order to configure my Directory Server to use TLS. As before, I'm using a CentOS 6.3 VM built from the minimal installation .iso, and all responses to prompts are exactly as I entered them on my system, in order to keep an exact record of how I built this server.

If you do not currently have a CA to sign certificate requests and are willing to use your own self-created, self-signed CA, follow the steps in my last post and you'll be right as rain. In fact, as I write, I'm bouncing back and forth between this post and that one, because I haven't yet generated the certificate request documented there.

And now, on to the continuation of my LDAP server installation & configuration.

  1. The first thing you'll need to do is set up the EPEL repository so that you can get the full Directory Server package, and to do that, you'll need a way to download the rpm.
    # yum install wget
  2. Now, download the epel repository installation rpm from one of the Fedora mirrors
    # wget http://ftp.osuosl.org/pub/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
  3. Install the EPEL rpm package
    # rpm -ivh epel-release-6-8.noarch.rpm
  4. Use yum to install the 389-ds package
    # yum install 389-ds
  5. If you are not on a network with a DNS server that has your server's hostname/IP address, you'll need an entry in your /etc/hosts file. I added the following entry (FQDN first, then the short alias) to my /etc/hosts:
    192.168.122.183       ds1.example.com    ds1
  6. Run the setup program
    # setup-ds-admin.pl
    Continue: yes
    Continue: yes
    Setup Type: 2
    Computer Name: ds1.example.com
    Proceed with host name: yes
    System user: nobody
    System group: nobody
    Register with existing Dir Server: no
    DirSrv Admin: admin
    DirSrv password: DirServerPwd
    Confirm password: DirServerPwd
    Admin Domain: example.com
    Dir Srv Port: 389
    Dir Srv ID: ds1
    Suffix: dc=example,dc=com
    Dir Manager DN: cn=Directory Manager
    Dir Manager Password: DirMgrPwd
    Confirm Password: DirMgrPwd
    Admin port: 9830
    Set up servers: yes
  7. For the next steps, you’ll need X-Windows access. This is probably overkill, but just to get things up and running, I ran:
    # yum install xorg-x11-server-Xorg xorg-x11-xauth xorg-x11-fonts*
  8. Next, since root is the only user on the box so far, you'll need to permit root logins over SSH by modifying /etc/ssh/sshd_config using vi (or other editor of choice). While you're in there, make sure X11Forwarding is also set to yes (it usually is by default):
    PermitRootLogin yes
  9. Connect to your new server with ssh, enabling X11 forwarding so the console can display on your workstation:
    # ssh -X root@192.168.122.183
  10. Start the Directory Server Console
    # 389-console &
  11. When prompted to sign into the console, enter the login information:
    User ID: cn=Directory Manager
    Password: DirMgrPwd
    Admin URL: http://localhost:9830
  12. Expand the entry for ds1.example.com
  13. Expand the entry for Server Group
  14. Double-click on Directory Server (ds1)
  15. Select Manage Certificates
  16. When prompted, set up a new password for the private key:
    New Password: DirSrvKeyPwd
    New Password (again): DirSrvKeyPwd
  17. Change to the CA Certs tab
  18. From your shell, get the text of your CA cert:
    # cat /etc/pki/CA/certs/ca-cert.crt
  19. Copy the contents of the ca-cert.crt to your clipboard
  20. From your Manage Certificates, CA tab, select Install…
  21. Select Paste from Clipboard and then select Next
  22. At the Certificate Information screen, select Next
  23. At the Certificate Type screen, select Next
  24. At the Intended Purpose screen, select Done
  25. Back on the Server Certs tab, select Request…
  26. At the Introduction screen, select Next
  27. Enter Requestor Information and select Next
    Server name: ds1
    Organization: Example
    Organizational Unit: Example
    City: Topeka
    State: Kansas
    Country: US United States
  28. Enter the private key password from Step 16 above and select Next:
    DirSrvKeyPwd
  29. At the Request Submission screen, select Save to file
    File Name: ds1.csr
  30. Select Done
  31. Now, you can follow the steps from the post on setting up your CA. The latter part of that post details how to use this certificate request that we just generated to create a new, signed cert for our Directory Server.
  32. Get the contents of your signed cert
    # cat ds1.pem
  33. From your Manage Certificates screen, on the Server Certs tab, select Install…
  34. From the Certificate Location screen, select Paste from Clipboard and then select Next
  35. From the Certificate Information screen, verify that the information is correct and select Next
  36. From the Certificate Type screen, select Next
  37. From the Token Password screen, enter your private key password and select Done
    DirSrvKeyPwd
  38. From the Manage Certificates screen, select Close
  39. Now, close the Directory Server window and go back to the Console
  40. Double-click the Administration Server
  41. From the Admin Server, select Manage Certificates
  42. Create a new password for the admin server:
    New Password: DirAdminSrvPwd
    New Password (again): DirAdminSrvPwd
  43. From the CA Certs tab, select Install…
  44. From a shell, get the contents of the CA cert and copy the contents to the clipboard
  45. From the Certificate Location screen, select Paste from Clipboard and select Next
  46. From the Certificate Information screen, verify the information and select Next
  47. From the Certificate Type screen, select Next
  48. From the Intended Purpose screen, select Done
  49. Back on the Server Certs tab, select Request…
  50. From the Introduction screen, select Next
  51. From the Requestor Information screen, fill out the information and select Next
    Server Name: ds1-admin
    Organization: Example
    Organizational Unit: Example
    City: Topeka
    State: Kansas
    Country: US United States
  52. Enter the private key password and select Next
    DirAdminSrvPwd
  53. At the Request Submission screen, select Save to File and then select Done
    File Name: ds1-admin.csr
  54. Once again, refer to the steps to sign your certificate request
  55. Repeat Steps 33-39, remembering to use the correct password in Step 37
    DirAdminSrvPwd
  56. Create a new text file /etc/dirsrv/slapd-ds1/pin.txt with the following contents:
    Internal (Software) Token:DirSrvKeyPwd
  57. From the Console, double-click to open the Directory Server
  58. From the Directory Server, on the Configuration tab, select the Encryption tab to the right
  59. From the Encryption tab, check the “Enable SSL for this server” checkbox
  60. Next, check the “Use this cipher family: RSA” checkbox, leave the rest of the fields at their defaults and select Save.
  61. From the Directory Server, back on the Tasks tab, select Restart Directory Server
  62. From the Configuration tab, select Data and check the “Enable fine-grained password policy” and any other options you wish to ensure that people use good, solid, secure passwords.
  63. Lastly, we want to make sure our Directory Server is set to start when the system is started:
    # chkconfig dirsrv on
    # chkconfig dirsrv-admin on
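
Once the directory server has been restarted with SSL enabled (Steps 59-61), a quick way to confirm that TLS actually works is a StartTLS search from the shell. This is just a sketch: it assumes the openldap-clients package and that you've copied the CA cert from the CA post (certs/ca-cert.crt) onto the machine you're testing from; adjust the paths and suffix to your setup:

# yum install openldap-clients
# echo "TLS_CACERT /etc/pki/CA/certs/ca-cert.crt" >> /etc/openldap/ldap.conf
# ldapsearch -x -ZZ -H ldap://ds1.example.com -b "dc=example,dc=com" -s base

The -ZZ flag forces StartTLS and makes the search fail if the TLS negotiation doesn't succeed, so getting a normal result back means your certificate chain is good.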

At this point, you have installed and configured Directory Server and set it up to use TLS in order to encrypt your logins. That does not mean it is finished, however. Rather than make this post several thousand words long, I am going to leave it here for today and do a separate post later on creating users so that you can actually use your new Directory Server.

June 22, 2013 | authentication, ldap

Create Self-Signed Certificate Authority in CentOS 6.3

First, a little housekeeping…

The pertinent details of my setup are that I’m running CentOS 6.3 and using the Virtual Machine Manager from the standard CentOS repositories. I created a VM with 15GB of disk and 1GB of memory to build my server. Not the most powerful system in the world, but sufficient to experiment with.

The OS for my VM was installed using the CentOS 6.3 minimal installation .iso so that it didn’t have any bells or whistles installed…just the basic OS. I pretty much did a default installation as I was not concerned with how my hard disk was partitioned as this was going to just be a scratch system.

The following steps were performed verbatim. In other words, since I just created this VM for the sole purpose of documenting these steps, I’m not masking anything. All input is exactly how I’m inputting it at the various prompts. The domain that I used really is example.com and the passwords that I’m putting are the ones that I actually used. By the time you read this, my VM will be destroyed or rebuilt anyway, so it’s not exactly risky to include all the details I used, but I felt it would help to reduce confusion.

After the OS install, I set up the networking and then did a ‘yum update’ to get all my packages up to date. I’m not going to go into detail on all that here as that information is readily available all over the Internet. If you can’t get that far, you’re probably not ready to do this yet, anyway. The openssl package was already installed with the minimal installation.

Now…onto my Certificate Authority setup…

  1. Change directory to the default CA directory:
    # cd /etc/pki/CA
  2. Create an index file for new certs:
    # touch index.txt
  3. Set first certificate number:
    # echo '01' > serial
    # echo '01' > crlnumber
  4. Create your CA cert and private key for your CA server:
    # openssl req -new -x509 -extensions v3_ca -keyout private/ca-cert.key -out certs/ca-cert.crt -days 365
    Enter PEM pass phrase: PassPhrase
    Confirm PEM pass phrase: PassPhrase
    Country Name: US
    State: Kansas
    City: Topeka
    Organization: Example
    Organizational Unit: Example
    Common Name: CA
    E-mail Address: root@example.com
  5. Set permissions on your private key:
    # chmod 400 private/ca-cert.key
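
It doesn't hurt to eyeball the new CA cert at this point and confirm the subject, validity dates, and CA basic constraints came out as intended:

# openssl x509 -in certs/ca-cert.crt -noout -subject -dates
# openssl x509 -in certs/ca-cert.crt -noout -text | grep -A1 'Basic Constraints'

The second command should show CA:TRUE, courtesy of the v3_ca extensions.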

Now, you're ready to sign certificate requests. When a new certificate request came in, the following is what I did to generate a new cert signed by my very own CA:

  1. From your CA server, change directory to /etc/pki/CA
    # cd /etc/pki/CA
  2. Copy your certificate request to the /etc/pki/CA/crl directory
    # cp /root/ds1.csr /etc/pki/CA/crl
  3. Sign your cert using your CA
    # openssl ca -in crl/ds1.csr -out newcerts/ds1.pem -keyfile private/ca-cert.key -cert certs/ca-cert.crt
    Sign cert? y
    Commit? y
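
Before handing the signed cert back to whoever requested it, you can verify that it chains up to your CA (the file names here match the ds1 request from my LDAP post):

# openssl verify -CAfile certs/ca-cert.crt newcerts/ds1.pem
# openssl x509 -in newcerts/ds1.pem -noout -subject -issuer -dates

The first command should simply print 'newcerts/ds1.pem: OK'.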

If you get an error that your stateOrProvinceName needs to be the same when the two values do, in fact, appear to be the same, what fixed that for me was editing the /etc/pki/tls/openssl.cnf file, setting the value 'string_mask' to 'pkix', and regenerating my CA cert (Step 4 above).

Hopefully, this will give you all the information you need to set up your CA so that you can sign your own certificates. Keep in mind that this does not mean other systems/companies/whoever will actually trust your CA or the certificates you sign with it. But for private, internal use, you get to control which CAs you trust, and you can add your own to your list of trusted CAs.

If you have any questions or if anything is incorrect or unclear, please let me know. My purpose here is to document the steps I took; they seem to have worked when I performed them as described, but that does not mean there isn't a better, more efficient way to do it. Feedback is always welcome.

March 21, 2013 | certs

The Joys of SELinux

For the past couple of months now, I’ve been trying to learn more about SELinux. On about half of the servers that I have inherited through my current position, SELinux is set up and set to Enforcing. On the other half, it is set to Permissive. There are only a couple that have it disabled entirely (I did say “about” half.)

Because so many of my servers are in Permissive mode, much of my time is spent scouring through the audit.log file and looking at AVC denial messages. To do this, I’ve been doing the following:

# ausearch -m avc -ts yesterday

This gives me a list of every AVC denial message since 12:00:01am the previous day. The reason I use 'yesterday' rather than 'today' is that 'today' only covers the window from 12:00:01am of the current day up to the moment the command runs. If I run it at 9:00am each morning, any denials logged between 9:00am and midnight the previous day would never show up. That's a pretty good chunk of time to ignore.

As you might imagine, I’ve been spending a lot of time with Google lately. Unfortunately, what I have found has not been supremely helpful. I see a lot of hits about SELinux and how to create custom policies and how to use tools like ausearch, audit2why, audit2allow, semanage, sesearch, restorecon and all that. But there is something that is very much lacking in what I am finding.

When facing a particular AVC denial message, the question I have not found the answer to is, “Should I allow this to happen?” In other words, I know that I can do something like this:

# ausearch -m avc -ts yesterday | grep ifconfig_t | audit2allow -M local

# semodule -i local.pp

These commands build and install a custom policy module so that the actions stop being denied and the AVC messages go away. That's great, and all. But the problem is, I want to know whether or not ifconfig_t should be trying to do whatever is being denied in the first place.

Now, it seems to me that we are, to a degree, stepping outside the realm of SELinux strictly speaking. SELinux doesn’t really “care” what sorts of custom policies you add to your system. If you add them, it will be enforced as you have specified. So, how do you answer the question of whether or not you should add a particular custom policy?

Currently, the servers we are running are RHEL5, so the policies for distros such as RHEL6 and the latest Fedora are going to contain more up-to-date information. Since I don't actually have a RHEL6 box, I've done what I hope is the next best thing: I now have a Fedora 17 VM on my desktop (which runs CentOS 6.2… so that might not be far off, either).

Now, what I can do is take certain AVC denial messages and run them through audit2allow (like I showed above) and get what the policy would be. In the case of one of the messages I was looking at, it was:

allow ifconfig_t initrc_t:tcp_socket { read write };

Next, I ran the following commands to find out whether any existing policy rules reference this access:

# sesearch --auditallow -s ifconfig_t -t initrc_t -c tcp_socket

# sesearch --dontaudit -s ifconfig_t -t initrc_t -c tcp_socket

The first didn’t return anything, but the second showed me a dontaudit rule. Since there is an existing dontaudit rule for these contexts in the standard policy, I figured it was safe to assume that ifconfig_t should have read/write access to tcp_sockets with a context of initrc_t.

The next problem was how to make the custom policy to implement the dontaudit rule since that obviously was not part of the latest policy settings on my RHEL5 servers.

Once again, it was back to Google. Once again, I found a whole lot of information that didn’t seem to help. I found lots of stuff on turning dontaudit on and off (an option that, thanks to Murphy, was not available in my RHEL5 implementation) and lots of information showing examples of dontaudit entries. But I was not finding anything about how to create a custom dontaudit rule. (There’s actually more to it, but I don’t want this post getting too long).

There were also some hits on how to compile a .te file using the 'make' command and such, but that just wasn't working for me. I finally found something that looked like it might help, so I decided to give it a try and, sure enough, it worked.

So, here is what I did. First, I let audit2allow build both my .te and .pp files:

# ausearch -m avc -ts yesterday | grep ifconfig_t | grep initrc_t | audit2allow -M local

Next, I deleted the .pp file:

# rm local.pp

Then I edited the local.te file with vim, changed the "allow" to "dontaudit" and saved the file. Now I had a .te file with the rule that I wanted to implement, but I needed the .pp file to actually get it in place. Thanks to an entry on the Fedora Wiki, I found the following:

# checkmodule -M -m -o local.mod local.te

# semodule_package -o local.pp -m local.mod

# semodule -i local.pp

So, let’s go through these…

The checkmodule command takes the .te file and creates a module file (.mod). The '-M' enables MLS/MCS support in the module; it may not be needed if you're not using MLS/MCS. The '-m' indicates that you are creating a non-base policy module, which I take to mean a sub-module that we'll be adding to the existing base. The '-o' simply specifies the output file, and the last argument, 'local.te', is the input file.

Next, the semodule_package program, as one might infer, takes the module file and creates a .pp file. The ‘-o’ is still the output file and the ‘-m’ is essentially the input module file you are going to use.

Finally, the semodule command is what installs the package (‘-i’).
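
For reference, the local.te that I fed into checkmodule looked roughly like this. The module line and the require block are what audit2allow generated from the denial; the dontaudit rule on the last line is the one I changed by hand from 'allow':

module local 1.0;

require {
    type ifconfig_t;
    type initrc_t;
    class tcp_socket { read write };
}

dontaudit ifconfig_t initrc_t:tcp_socket { read write };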

There may be a better, faster, more efficient, easier to implement way to get all this done. If there is, please do not hesitate to share as I always prefer to do things the easiest way possible. But, whether this is the easiest or best way to do this or not, it seems to have worked. And hopefully, it may be of some use to you as it was to me.

Until next time…

September 21, 2012 | selinux

Kerberos Authentication on Load Balanced Web Servers

Today's post is simply a follow-on from yesterday's post about Kerberos authentication for a desktop application and a web application. In yesterday's post, I went over what we did to set up Kerberos authentication for our Dev and QA environments, each of which has a single web server and a single application server.

In our production environment, however, we have two web servers and two application servers. The web servers sit behind a load balancer, which provides a virtual IP address (VIP) and determines which web server to send HTTP requests to. The application servers are clustered.

Rather than re-hash a lot of the same steps here: for the actual setup of the various Service Principals in the production environment, please read my previous post on creating the Service Principal user accounts in AD and setting up the krb5.conf, krb5_ccache and krb5.keytab files on the server. Basically, all of that remains the same except for a couple of things that I will detail in this post.

Before we get to that, however, let's look at how the challenge with this environment manifested and how we went about resolving it…

After having gone through setting up our Dev and QA environments, I moved on to set up the production environment using the same method. Service Principal accounts were created in AD and configured the same way we had set up the Dev and QA SPs. The Kerberos config file, cache and keytab files were all created using exactly the same steps as on the other servers.

Also, the jaas.conf and tomcat5.conf files were set the same as the other servers. Everything was exactly the same…except for the names of the servers/Service Principals on each system.

Unfortunately, when Tomcat was started and we went to http://webapp.domain.com, we would get "Service Temporarily Unavailable." If I commented out the following line from the tomcat5.conf file, it worked fine (although then I would have to enter my credentials):

JAVA_OPTS="${JAVA_OPTS} -Xmx1024m -Djava.security.auth.login.config=/path/to/jaas.conf -Djava.security.krb5.conf=/path/to/krb5/files/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false"

The only difference between production and Dev/QA was the load balancer. So, off I went to the networking guy…

We found that our load balancer was periodically doing a health check using an HTTP call and then looking for specific text in the response. When the line above was uncommented, the load balancer's request was never authenticated, so it never received the expected string. As a result, the load balancer shut down the VIP because the server was "unavailable" as far as the health check was concerned.

After changing the health check to a TCP call rather than HTTP, we were able to get to the main page of the web server, though SSO was still not working correctly.

With much fuss and going back and forth about the fact that the only difference still seemed to be the load balancer, we decided to look at how our DNS structure was set up.

We had a HOST(A) entry for web-dev.domain.com as well as for web-qa.domain.com and web-prod1.domain.com and web-prod2.domain.com. Then, we had an Alias set up for web-dev.domain.com called webapp.dev.domain.com. We also had an Alias for web-qa.domain.com called webapp.qa.domain.com.

However, when I looked at the production set up, we had a HOST(A) entry for webapp.domain.com which pointed to the VIP.

Hmmm….

Here is what I did:

Following the steps from the previous post, I created a user in AD called HTTP/webapp-prod.domain.com and set up the Service Principal Name.

Next, I got on web-prod1.domain.com and ran:

# kinit HTTP/webapp-prod.domain.com

This added the ticket to the krb5_ccache file. Next, I ran:

# ktutil
ktutil: addent -password -p HTTP/webapp-prod.domain.com -k 2 -e rc4-hmac
Password for HTTP/webapp-prod.domain.com@DOMAIN.COM:
ktutil: addent -password -p HTTP/webapp-prod -k 2 -e rc4-hmac
Password for HTTP/webapp-prod@DOMAIN.COM:
ktutil: wkt /path/to/krb5/files/krb5.keytab
ktutil: quit
#

Then, I did the same thing on web-prod2.domain.com. Now, each production web server had four Kerberos tickets in the keytab file. Two for the local server itself (one fully qualified and the other a short name) and two for HTTP/webapp-prod (one fully qualified and the other a short name).

In my DNS server, I removed the HOST(A) entry for webapp.domain.com and created a new DNS entry for webapp-prod.domain.com which pointed to the VIP, and an Alias for that entry called webapp.domain.com.
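
A quick sanity check on the new layout (the address below is a made-up placeholder) shows the alias chasing to the new HOST(A) record and on to the VIP:

# dig +short webapp.domain.com
webapp-prod.domain.com.
10.0.20.15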

Voila!!

I cleared all my temp files and cookies from IE, relaunched the browser and went to http://webapp.domain.com and found myself already logged into the web application and looking at my Dashboard!

Perhaps there is a better way to accomplish what I was trying to do, but since I was not able to find anything that would resolve this issue, this was the best thing I came up with.

If you know of a better way to use Kerberos authentication on a load balanced VIP to a web server, please share your experience. However, if you are struggling with something similar, hopefully this helps!

June 13, 2012 | kerberos

Setting Up Kerberos Authentication for App and Web

I have recently been working a lot on getting Kerberos authentication working for one of our enterprise applications. The environment consists of a database backend, a server application piece, and a web front end.

We have three environments for this application: Dev, QA and Production. Naturally, I wanted to tackle the Dev environment first.

Another piece to this puzzle is that there is a Desktop Application as well as the web interface. The Desktop app connects directly to the server application.

For the purposes of clarifying what servers are where, here are the servers I’m dealing with (the names have been changed to protect the innocent):

app-dev.domain.com
web-dev.domain.com
app-qa.domain.com
web-qa.domain.com
app-prod.domain.com
web-prod.domain.com

The database servers do not enter into this particular picture, so I didn’t bother to list them. Everything dealing with this issue has to do with these six servers.

The first thing I needed to do was to make sure the app-dev server could use Kerberos authentication. So, I created a “user” in Active Directory called “ENTAPP/app-dev.domain.com.” The Windows NT/2000 login name was “ENTAPP_app-dev.”

Next, I recorded the password for this user and then, on the domain controller, ran the following commands:

setspn -a ENTAPP/app-dev.domain.com ENTAPP_app-dev
setspn -a ENTAPP/app-dev ENTAPP_app-dev

This allowed me to use both the fully-qualified as well as the short name version of the server name. Once that was done, it was on to the Linux side…

From the Linux side of things, I had to set up some environment variables so that the system would know where to find the various Kerberos files I would be creating:

export KRB5_HOME=/path/to/krb5/files
export KRB5_CONFIG=/path/to/krb5/files/krb5.conf
export KRB5CCNAME=/path/to/krb5/files/krb5_cache
export KRB5_KTNAME=/path/to/krb5/files/krb5.keytab
export KRB5_PATH=/path/to/krb5/files/krb5.conf  <-- Not sure if this one is necessary

Once this was done, the next step was to create the krb5.conf file in the specified directory. Keep in mind, setting these at the command line creates the environment variables for the current shell, but they will not survive a reboot.

Once I was in the specified directory, I just ran vim krb5.conf and set it up:

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[domain_realm]
.domain.com = DOMAIN.COM
domain.com = DOMAIN.COM

[libdefaults]
default_realm = DOMAIN.COM
forwardable=true
default_keytab_name=FILE:/path/to/krb5/files/krb5.keytab
no_addresses=true
default_tkt_enctypes = rc4-hmac

[realms]
DOMAIN.COM = {
admin_server = domain.com:769   <-- Port # may be different in your environment
default_domain = domain.com
kdc = domain.com:88   <-- Port # may be different in your environment
}

[appdefaults]

pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}

The next step was to cache the Kerberos tickets for the Service Principals created in Active Directory:

# kinit ENTAPP/app-dev.domain.com
Password for ENTAPP/app-dev.domain.com@DOMAIN.COM:

Next, we get the Key Version Number:

# kvno ENTAPP/app-dev.domain.com
ENTAPP/app-dev.domain.com@DOMAIN.COM: kvno = 2

Now that we know the Key Version Number, we can create our keytab file:

# ktutil
ktutil: addent -password -p ENTAPP/app-dev.domain.com -k 2 -e rc4-hmac
Password for ENTAPP/app-dev.domain.com@DOMAIN.COM:
ktutil: addent -password -p ENTAPP/app-dev -k 2 -e rc4-hmac
Password for ENTAPP/app-dev@DOMAIN.COM:
ktutil: wkt /path/to/krb5/files/krb5.keytab
ktutil: quit
#

With the keytab file and cache set up, we can now do a couple things to test. First, you can check to see the tickets in the keytab file:

# klist -ket
Keytab name: FILE:/path/to/krb5/files/krb5.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
2 06/11/12 10:49:03 ENTAPP/app-dev.domain.com@DOMAIN.COM (ArcFour with HMAC/md5)
2 06/11/12 10:49:03 ENTAPP/app-dev@DOMAIN.COM (ArcFour with HMAC/md5)

You can also verify that the keytab file successfully authenticates to Active Directory:

# kinit -k ENTAPP/app-dev.domain.com

If you do not get an error, the authentication worked. Yes, I know… I wish it would actually tell you it worked rather than just not telling you it didn't work. Take that up with the folks who created all this.

Now, our Kerberos stuff is all set up. In our case, because the application runs as a specific user, we had to modify the application's startup script to include the environment variables listed above so that the application could find them. You could also include them in /etc/profile, but that seemed like overkill.
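
As a sketch (the paths are the same placeholders as above), the block we added near the top of that startup script was just the exports themselves, so they are in place before the application launches:

export KRB5_HOME=/path/to/krb5/files
export KRB5_CONFIG=/path/to/krb5/files/krb5.conf
export KRB5CCNAME=/path/to/krb5/files/krb5_cache
export KRB5_KTNAME=/path/to/krb5/files/krb5.keytab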

After all this, we were able to set up our application to use kerberos authentication. I won’t get into that because applications will all be different in this regard.

The next thing we had to do was set up the web interface. The first part was pretty much the same thing. We created a user account in Active Directory (HTTP/web-dev.domain.com) and used the setspn.exe command to add Service Principal names.

The krb5.conf file on the web server was basically the same, as were the environment variables to find the krb5 files, though the path was slightly different.

Using the kinit and ktutil commands also worked the same as for the app server, obviously specifying the appropriate names for the web server.

Now, on the web server, we did have to set up a jaas.conf file in order to perform the kerberos authentication. This is what we found worked:

com.sun.security.jgss.initiate {
com.sun.security.auth.module.Krb5LoginModule required
principal="HTTP/web-dev.domain.com" useKeyTab=true
keyTab="/path/to/krb5/files/krb5.keytab"
doNotPrompt=true storeKey=true debug=true;
};

com.sun.security.jgss.accept {
com.sun.security.auth.module.Krb5LoginModule required
principal="HTTP/web-dev.domain.com" useKeyTab=true
keyTab="/path/to/krb5/files/krb5.keytab"
doNotPrompt=true storeKey=true debug=true;
};

Since we are using Tomcat5, we added the following line in the tomcat5.conf:

JAVA_OPTS="${JAVA_OPTS} -Xmx1024m -Djava.security.auth.login.config=/path/to/jaas.conf -Djava.security.krb5.conf=/path/to/krb5/files/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false"

Then we restarted tomcat:

# service tomcat5 restart
Stopping tomcat5:                                          [  OK  ]
Starting tomcat5:                                          [  OK  ]

In our application, there was another file that had to be modified in order to use SPNEGO filters. This was in a web.xml file and the code was already included, but was commented out. We uncommented it and that was it. Your application may or may not require that.

At this point, I was able to watch the log file:

# tail -f /path/to/catalina.out

What I was looking for was the following:

Debug is  true storeKey true useTicketCache false useKeyTab true doNotPrompt true ticketCache is null isInitiator true KeyTab is /path/to/krb5/files/krb5.keytab refreshKrb5Config is false principal is HTTP/web-dev.domain.com tryFirstPass is false useFirstPass is false storePass is false clearPass is false
principal’s key obtained from the keytab
Acquire TGT using AS Exchange
principal is HTTP/web-dev.domain.com@DOMAIN.COM
EncryptionKey: keyType=23 keyBytes (hex dump)=0000: B1 R4 51 C1 5F 24 92 30   AD CA 1B 21 B9 22 13 A5  ..@..exP..*q..h.

Added server’s keyKerberos Principal HTTP/web-dev.domain.com@DOMAIN.COMKey Version 2key EncryptionKey: keyType=23 keyBytes (hex dump)=
0000: B1 R4 51 C1 5F 24 92 30   AD CA 1B 21 B9 22 13 A5  ..@..exP..*q..h.

[Krb5LoginModule] added Krb5Principal  HTTP/web-dev.domain.com@DOMAIN.COM to Subject
Commit Succeeded

With this verified, I was now able to use Single Sign-On (SSO) and was no longer required to enter my credentials to log into the web application.

I repeated this entire process in the QA and Production environments. Everything went nice and smoothly in QA, but we had some issues with Production because it was load balanced with two web servers.

Due to the length of this post, I’ll do a separate post to deal with the issues we had with production and how they were resolved.

Until then, hopefully this will help!


P.S. I also wanted to mention that we did have a bit of an issue when this was first set up as described above. With the test users we had set up, SSO worked exactly as desired. However, when any of the actual end users attempted to sign in using SSO, it failed.

After much investigation, and lots of hits about Kerberos ticket sizes being too large and users being members of too many groups, we finally found the cause / solution…

After migrating our Active Directory domain about a year ago, we had run our domains in parallel for some time. The effect of this is that each user account that was migrated from the old domain to the new one had a SID history which included all the SIDs used in the old domain. This was also true for any groups that were migrated.

Put these two things together and you have users with SID histories belonging to groups with SID histories, some of which belonged to still other groups with SID histories. Effectively, the SID histories all combined to bloat the Kerberos ticket size.

We went through and removed the SID history from all users and groups and… voila!!… all is well with the world!

June 11, 2012 | kerberos

RHN Satellite Base Channels

Recently, I have been working on developing a Red Hat Satellite Kickstart profile to build servers using RHEL 6. Thus far, we have been continuing to build any newly commissioned servers using RHEL 5. I started out by creating a RHEL 6 profile and copying basically every screen from our RHEL 5 profile and changing anything that referenced the older version so that it would point to the newer one.

At first, I was having difficulty getting the newly built server to register with our Satellite server. Once I got that worked out, I found that every time I did a Kickstart build, it was registering with my Satellite, but it was setting it to subscribe to a RHEL 5 base channel.

Google…here I come!

Most of the hits I came across on Google pointed me to Red Hat's documentation on Kickstart. Unfortunately, for what I was looking for, it was not as helpful as I would have liked. Essentially, all it mentioned with regard to base channels was to open the Kickstart profile, navigate to Kickstart Details -> Operating System, and set the Base Channel value.

The problem with that was, I had that pointing to a RHEL 6 base channel but when the server was built, it still ended up subscribing to a RHEL 5 channel.

At this point, I'm glad that I had someone I could call upon who had been involved in setting up our Satellite and so was more familiar than I am with where certain settings live. I would never have guessed this one…

So, he took me into the Kickstart profile and navigated to Activation Keys. The only one we had selected was the RHEL 6 x64 key. The listed keys should all be hyperlinks, so he clicked on the key that I had selected and, sure enough, the fourth entry down was a setting for “Base Channels” and was set to “RHN Satellite Default.”  I changed it to the new RHEL 6 channel that I had created and updated the key.

Now, it is probably important to note that there is a known bug in RHN Satellite having to do with cloned channels and changing the base channel for the Activation Key. From what I have read, this should be taken care of if you're up to date on your Satellite patches. If you are not, this will only work if you create a channel from scratch rather than cloning one. Also, channels that were cloned before the bugfix and hit this bug will not automagically be fixed; only newly cloned channels will behave.

Well, that’s about all for today. One more lesson learned about Red Hat Satellite that I will not likely forget any time soon.

If this helps you in any way, please feel free to share your feedback as it always makes me feel better to know that I am not the only one who struggles with this kind of thing.

May 24, 2012 | red hat, satellite

Clobbering Java

This may not happen often, but learning two new things in the same day is probably what prompted me to start this blog to begin with. This one also has to do with the errata updates that I recently applied on several of my RHEL servers.

In this case, we have a server that runs a web interface built on Java. After applying the errata, the web page wouldn't come up. I went through the old documentation and ran the scripts to stop and restart the necessary services, but to no avail.

I was able to verify that the main application was running. Even verified that Tomcat was running. But no web site!

In this case, there is also a Desktop application to interface with the system and that worked just fine. Just couldn’t get to things from the browser.

Once again, after a few hours of investigation, we found that the application had been using a 32-bit version of Java. Well…since the Satellite server only had the latest version of the 64-bit Java, it was kind enough to remove the 32-bit version for us and install the new 64-bit version instead. Isn’t that considerate?

We didn’t stop there, however. After manually re-installing the 32-bit version, we synced up our Satellite and verified that the 32-bit version was now available. So, as a test, we pushed out both 32-bit and 64-bit versions of Java (the same release) and this time, nothing broke!
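
One lesson from this: it's worth checking which architectures of a package are actually installed before and after patching. The package name below is just an example (yours may be a Sun/Oracle JDK package instead):

# rpm -q --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' java-1.6.0-openjdk

If the application needs the 32-bit runtime, you want to see both an i386 (or i686) line and an x86_64 line come back.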

Something to keep in mind if you, like many others in the world of technology today, are heavily reliant on Java to run applications.

March 29, 2012 | java

It’s the Little Things

So, I recently deployed the latest errata via our Red Hat Satellite server to a handful of RHEL boxes. The updates included a minor OS revision, so we went from RHEL 5.7 to RHEL 5.8. This was my first deployment of errata onto production servers, so I was a bit nervous.

Because of the kernel update, a reboot was required. So, I scheduled the errata to deploy at 7:00pm in order to give them time to install before I started bouncing servers at 9:00pm. Before I got to 9:00, however, I started getting alert e-mails from system monitors about systems not responding.

Most of the restarts went fine. However, several of the servers were in our DMZ, where, instead of authenticating to our Active Directory server, systems authenticate against a directory server that lives in the DMZ.

Five of the servers being restarted were in the DMZ, including DNS and Directory Servers. After the application of the errata, I found that I could not log in using LDAP user accounts. Good thing they were VMs and I had access to the console so I could log in as root!

After fighting with this for about 2 1/2 hours in the middle of the night, I decided that since the mission critical applications running in the DMZ did not seem to be adversely affected, I’d get some rest and tackle this first thing in the morning.

After another couple of hours of sifting through logs and working with one of my senior server admins (who actually knows how to use tools like strace), we found that LDAP was throwing an error regarding "Unauthorized connections" to the directory server. That led us to take a look at the /etc/ldap.secret file.

One thing we have noticed about RHEL is that a lot of files, for some reason, require a trailing newline at the end in order to be recognized. Our /etc/ldap.secret file did not have a newline at the end of the first line… so there was one and only one line in the file, and the EOF came immediately after the password.

# vi /etc/ldap.secret
<ldap password>
~
~
~

became

# vi /etc/ldap.secret
<ldap password>

~
~
:wq

[Note the lack of the ‘~’ character on the second line.]

Voila!!!

# getent passwd johndoe
johndoe:*:10001:10001:johndoe:/home/johndoe:/bin/bash

In less than 15 seconds, we fixed a problem that I had spent several hours working on.

Don’t you hate when that happens?
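
For future reference, you can also check for that trailing newline without opening an editor at all (this works on any file, not just ldap.secret):

# tail -c 1 /etc/ldap.secret | od -An -c

If the last byte shown is anything other than \n, a bare echo (which writes nothing but a newline) appends the missing one:

# echo >> /etc/ldap.secret

Just make sure the newline really is missing first, or you'll tack on an extra blank line instead.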

March 29, 2012 | ldap