In early May, we mentioned that we would be updating our DNS records to reflect support for
SPF (Sender Policy Framework), in an attempt to help curb spam and to work together
with other providers and users who rely on SPF.
However, our findings were somewhat inconclusive; a few Parodius users informed us
that Email sent to their own addresses was on the verge of being marked as spam (by SpamAssassin). As it
turned out, these mails were being given a very high score because of the SPF lookup being
done by SA. For some reason, our SPF setup "wasn't working right"... except that the evidence
presented to us made no sense -- everything was, in fact, set up exactly as it should be.
We took the time to ask some of the more clueful individuals on the spf-users mailing list, in
hopes that someone there could inform us as to what the mistake was. For further details,
see our thread.
The users turned out to be not very clueful at all; there was a lot of speculation that our OUTGOING
mail was being passed through SpamAssassin (which is in no way, shape, or form being done, nor is it
even possible with our setup). Language barriers also became a major problem (which is odd,
since all SPF documentation and details are in English). In the end, no one managed to shed
any light on what was really going on, despite all the evidence presented.
Since we can't accept such flaws in technology, our SPF records have been removed from our
DNS zones, and will not be put back until someone takes the time to explain exactly what's
going on.
For now, it seems SPF relies on some incredibly inane assumptions about server
configuration -- from what we've seen, it's as if SPF expects you to have a machine
physically named and dedicated to handling SMTP traffic. Systems using IP aliases seem to
fall victim to strange assumptions made by SPF; something, somewhere, assumes
that the IP of whatever handles the SMTP traffic should resolve
to the same name that gethostname(3) returns. If this is indeed done within SPF
detection systems (or is possibly related to sendmail; who knows!), it is a VERY bad
assumption, and will eventually be noticed and discussed by other system administrators.
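For anyone who wants to poke at this themselves, here is a rough diagnostic sketch. The hostnames
and IP below are placeholders, not our actual setup; the idea is simply to compare what the box calls
itself against what its outbound IP resolves to, and against whatever SPF record is published:

# hostname
mail.example.com
# dig +short -x 192.0.2.10
smtp.example.com.
# dig +short txt example.com
"v=spf1 a mx ~all"

If the first two names differ (as they easily can on a box using IP aliases), you have exactly the
kind of mismatch we suspect is tripping up some SPF checkers.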
Recently we at Parodius have become somewhat disappointed by
Weblaunching,
our present registrar, due to changes to their domain management system and strange integrations with other
registries such as Enom (we've been trying out their system as well, with similar experiences). Because of this, we
decided to look at other
OpenSRS-sanctioned
registrars to see who else was available... and we came upon
SpyProductions.
While filing a transfer of one of our domains (used solely as a web and hosting sandbox) to SpyProductions, we
encountered quite a few "interesting" -- and downright insecure -- aspects of their transfer and billing processes:
- Login authentication is done using HTTP, not HTTP with SSL -- meaning, your login/password credentials are
being sent over the Internet in plain-text.
- Domain transfers
are done using HTTP, not HTTP with SSL -- meaning, all billing information is being sent over the Internet
in plain-text. This includes your billing details, credit card number, and CVN.
- In addition, transfers use HTTP GET, where all contents of the form fields are placed into the URL for
extraction. The side-effect of this is that your browser now has a page cached on your local hard disk which
contains all of your billing information, including your CC and CVN. Using HTTP POST (with PHP sessions for the
sensitive information) would be better; see the short illustration after this list.
- An SSL-based method of contact was found via their
"make contact"
link, under
"Secure Contact Form".
The certificate used hasn't been signed by a valid CA (choose View Certificate); instead, SpyProductions signed
their own certificate, making it completely worthless as far as security goes. I guess they felt
paying US$49
was unreasonable; I mean, who really needs a legitimate CA to sign their SSL cert? ;-)
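To illustrate the GET problem, here is a sketch using made-up URLs and field names (not SpyProductions'
actual form). With GET, the sensitive data becomes part of the URL itself, which ends up in your browser
history and cache, in any proxy logs along the way, and in the server's access logs; with POST over SSL,
it travels in the encrypted request body instead:

  GET:   curl 'http://registrar.example/transfer?domain=example.org&cc=4111111111111111&cvn=123'
  POST:  curl --data 'domain=example.org&cc=4111111111111111&cvn=123' 'https://registrar.example/transfer'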
A bit of Googling shows that this registrar has a history of being dragged into legal
battles by customers attempting shady activities, such as registering domains like cocacola.info and
other nonsense. Admittedly, that isn't the registrar's fault, but SpyProductions looks to be a one-man
operation (you can find the owner's
blog online).
He seems like a decent enough fellow, but regardless, I wouldn't bother registering a domain
with them -- or if you already have, consider closing your account and getting your CC number changed.
All of the above is an accident waiting to happen...
Parodius is now publishing
Sender ID SPF
records. Our SPF records presently use SOFTFAIL (~all); this means that mail
which does not pass SPF tests will be marked as "potentially" coming from an invalid sender, but will not
be rejected outright. We are using SOFTFAIL "just in case" things don't work correctly.
Our SPF records presently do not apply to "subdomains" (e.g. foobar.parodius.com). In addition, our SMTP
servers are now configured to do SPF lookups as well.
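For the curious, an SPF record is just a TXT record in the DNS zone. A hypothetical example (not our
literal record) looks something like the following; the "~all" at the end is what makes it SOFTFAIL,
telling SPF-aware servers to treat mail from unlisted hosts as a soft failure rather than rejecting it:

example.com.    IN  TXT    "v=spf1 a mx ~all"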
Important notes for Parodius users:
- Individuals sending mail through their ISP's mail servers with a
[email protected] address
may find that some of their mail gets rejected by Internet mail servers that use SPF. These individuals
should contact us so that mail from their [email protected] address can be sent through our mail servers
instead.
- Individuals bouncing (forwarding without changing headers) mail without changing the
From:
header line to match their own address may find that such mail gets rejected by Internet mail
servers using SPF. This is a
known limitation
of SPF (the link content refers to bouncing as "forwarding", and forwarding as "remailing"). Users should
configure their mail client to change the From: line, or do it manually, before bouncing mail.
In the future, we will likely adopt the
SRS
model, which should address this issue.
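To give a rough idea of what SRS does (using hypothetical addresses purely for illustration): the
forwarding host rewrites the envelope sender into its own domain, encoding the original sender, a
timestamp, and a short hash so that bounces can still be routed back. Something along these lines:

  before forwarding:  MAIL FROM:<alice@example.org>
  after forwarding:   MAIL FROM:<SRS0=hash=tt=example.org=alice@forwarder.example>

Since the rewritten address belongs to the forwarder's own domain, it passes the forwarder's SPF
check instead of failing the original sender's.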
The previously scheduled maintenance has been completed.
Things went as planned, except for one small detail: it seems that on FreeBSD 5.x, MySQL (4.x and 5.x) will
SIGSEGV upon a network connection (but not a UNIX domain socket) unless you have CPUTYPE defined (preferably in
/etc/make.conf). This led us on a wild goose chase for nearly 4 hours.
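For reference, the workaround amounts to a single line in /etc/make.conf, followed by rebuilding MySQL so
the setting takes effect. The value below is only an example; pick the CPUTYPE matching your processor:

# /etc/make.conf -- example only; use p3, p4, athlon-xp, etc. as appropriate
CPUTYPE?=p4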
Wow, has it really been almost 7 months since our last posting? Eeek!
This coming Monday (July 19th), we will be de-commissioning gabriel, one of our back-end
servers (secondary DNS and primary SQL server). gabriel will be replaced with a completely
different server, with completely brand-new hardware. Here's a comparison:
           | Old server (gabriel)          | New server (medusa)
MB/Chassis | Tyan Tiger 200T; 1U custom    | SuperMicro SuperServer 5013C-T; 1U
OS         | FreeBSD 5.2.1                 | FreeBSD 5.2.1
CPU        | Dual 800MHz Pentium III       | Single 2.6GHz Pentium 4 w/ HTT
RAM        | 256MB PC133                   | 1024MB DDR-533
Disk       | 80GB ATA133; 2MB cache        | 120GB SATA; 8MB cache; RAID capable
Network    | Dual 10/100mbit Intel 82559   | Dual 10/100/1000mbit Intel 82541EI
IRQ method | SMP w/out ACPI                | SMP w/ ACPI
Work will likely be done sometime in the evening (PDT/PST). Impact will be noticeable;
sites may seem to "randomly" fail upon DNS resolution, and sites that rely heavily on SQL will
definitely be affected.
The old server will be sold -- whole or in pieces -- to those who are interested
in the parts; whatever remains will be sold on eBay. And yes, the hard disk will be completely
erased (low-level format plus zeroing) to ensure that any sensitive data is protected.
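For the curious, zeroing a drive is a one-liner under FreeBSD. A rough sketch, assuming the old disk
shows up as /dev/ad0 (a purely illustrative device name -- triple-check yours before running anything
like this):

# dd if=/dev/zero of=/dev/ad0 bs=1m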
If you have any questions or comments, drop me an Email.
We successfully relocated our equipment on January 20th starting
at 20:00; we didn't complete relocation until nearly 01:00 the
following morning. Many thanks to
One-Thumb, Inc.
for letting us stay in their cage for as long as we did. Don, you
rock.
As many of you might have noticed, there were a couple of snags along
the way. Aside from our new provider giving us the wrong netblock
information (which got sorted out within 5 minutes; no big deal
there), our secondary DNS server / SQL server / backup server decided
to go belly-up during a BIOS upgrade, resulting in the machine being
completely unusable.
The problem with the backup server being offline was two-fold; part
of our maintenance involved removing two hard disks in our primary
(web) server due to bad blocks. We did backups of everything moments
before taking the servers down for maintenance, which was a good thing.
Once the backup server's BIOS failed, we had no choice but to take the
drive out and place it into a working machine so that everyone's data
could be restored. Due to the late hour, we decided to simply bring
the backup server home with us and restore data over the network. This
proved to be time-consuming, since we only had a 384kbit ADSL link.
Simultaneously we purchased a replacement BIOS from
BIOSMAN, who proved to be quick
and reliable. The backup server was up and working less than 24 hours
later, although it has yet to be re-deployed at the co-lo.
The full restoration (about 6GBytes of data) took approximately
38.5 hours, and was 100% successful.
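(As a sanity check on that figure: 6GBytes is roughly 51.5 billion bits, and at a theoretical
384kbit/sec that works out to about 134,000 seconds, or a little over 37 hours -- so 38.5 hours
including protocol overhead is about what you would expect.)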
The only pending issue is the lack of a secondary DNS server, which
does pose a problem. We will be fixing this in the next couple of days,
so if you find that visiting our hosted sites seems to take longer
than usual (for the connection to be made, not for the page to render),
this is why.
We will also likely be doing maintenance again in the near future,
replacing our primary server with a
SuperMicro 5013C-T
box. This depends, however, on whether the hardware is fully compatible
with FreeBSD.
"Thank you" goes out to every visitor of hosted sites; your patience
paid off.
The previously-mentioned maintenance will occur during the evening of
January 20, 2004, starting at 19:00 PST. It will likely take
between one and two hours, minus DNS propagation time.
This is a set date, and bar death or plague, will not be changed.
If you have any questions, drop me a line.
The aforementioned maintenance we've scheduled has been put on hold for a
short while, as our contact for getting into our old co-location cage
has fallen ill with the black plague (or something of that nature). We
still need to schedule a visit with him as our liaison.
We'll be posting a reminder as to when the maintenance will actually
occur in the near future.
All good things must come to an end, and from that ending, better things are
to come. Sadly, our previous co-location provider,
Anarchy Solutions,
has decided to close up shop for a couple of different reasons, all completely
honourable and noble. Our hats are off to Karl for providing us great service
and reliability throughout the past year or so: we couldn't have been happier.
In light of this, we've taken up a co-location deal with the
co-location facility itself:
Hurricane Electric. We'll remain at the
same physical facility located out of
Fremont, California.
There are many pros and cons to this deal, but mostly pros: we will have
more rack space for our equipment and future servers, as well as direct access
to our cage 24x7x365. Our bandwidth and network speed will remain the same. On
the other hand, our monthly co-location fee will go up by US$75.00, but this
isn't too big of a deal. Hosted users will likely NOT be subject to any fees.
We still want to keep this as a free service above all else.
One issue that will impact both users and visitors is that we will be moving
to a different co-location cage and re-IP'ing our equipment. This will cause a
full site-wide outage for every site we host until our DNS gets updated. This
usually takes 24-48 hours, but can take up to a week to fully propagate across
the entire Internet.
We will also take the opportunity to replace some equipment in our servers
(in particular, the two main data storage drives we use, which have gathered
some bad blocks over the years), as well as upgrade some of our equipment to
hardware that's more reliable and scalable -- specifically, a 24-port switch
to replace our 12-port model, and a new power management unit. Fun admin toys...
We plan to take care of all of this, at the latest, by January 11, 2004.
Time is of the essence, and we apologise for the short notice (we actually
had plans to take care of this on the 5th, but there were some snags with
the delegation of the service order which needed to be worked out).
User-impacting downtime is hard to estimate given all of the above. It may be
a few hours or it may be a day, depending on the turn of events. Our
actual co-location visit should last 2-3 hours at most, but will require
many hours of remote administrative work once our systems are up and usable
from an admin's point of view. And yes, we will be doing full file-system
backups shortly prior to our maintenance window, so don't be too worried.
If you have any questions, please feel free to drop me an Email. I'll be
more than happy to provide answers.
I know this is a bit of a long shot, but my attempts have been relatively fruitless thus far. I am searching for any and all IRC logs recorded from EFnet #emu throughout its history. Many people associated with parodius were also associated with the channel, and I want all the logs you can find! Traffic from #gbinfow, #snesemu, or #emuhaven would also be welcome, of course...
If you can help, feel free to attach text files (compressed or not) in an Email to [email protected]. Thanks in advance.
Maintenance has been completed successfully. If
you encounter any problems, please let us know.
We will be doing some server maintenance tomorrow
morning, rebuilding some binaries and performing some
general upgrades. This will be performed around 01:00
on November 30th.
Estimated downtime is about 30 minutes, but may be
less depending upon any issues we encounter.
The aforementioned maintenance has been completed
successfully.
We ran into a few snags pertaining to some recent
commits to the FreeBSD 4-STABLE tree, but managed to
work around them without too much trouble. The issues
were corrected in a more recent version of the CVS tree.
Naturally, SpamAssassin has been successfully deployed
for individuals who wanted it (so far, just myself and
one other user), and it seems to be working quite
well.
We will be doing the aforementioned SpamAssassin + perl
modifications tomorrow, November 1st, 2003. The
work will be performed during the afternoon/early evening
(PST).
During this time the web server will be offline and users'
accounts may be unavailable. Visitors may also find that
the service will be intermittent, depending on the work we
are performing at the time.
We'll also be taking this time to do some unrelated work,
particularly in regards to updating some outdated software
we rely on, as well as updating our web server software.
In the next couple of weeks, we will be making some system-wide
changes involving perl, a commonly-used piece of software here
at Parodius. perl is used throughout numerous sites we host as
a CGI interface language, as well as by some of our system
scripts and configuration utilities. The change will
(hopefully) be transparent to end-users; the web server will
be disabled while we are doing the work, to ensure that
no mishaps occur.
The operating system we use comes with perl 5.005_03 installed
by default, without a very "clean" way to remove or upgrade
it. Presently, perl is at version 5.8.0, which includes numerous
fixes and threading support, as well as many updated modules and
features.
Upgrading from one version to the other is not exactly a
"walk in the park." It takes about an hour of administrative
work and cannot be done automatically. MANY things need to be
rebuilt/recompiled, and much investigation needs to be done to
confirm that all "old" perl libraries and modules get removed.
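For those wondering what the process roughly looks like, here is a sketch of the kind of steps
involved, assuming the ports collection and portupgrade are installed -- an outline only, not our
exact procedure:

# cd /usr/ports/lang/perl5.8 && make install clean
# use.perl port
# portupgrade -rf perl

The first step builds and installs perl 5.8.0 from ports, use.perl points the system at the new
interpreter, and the last step forcibly rebuilds everything that depends on perl so that no module
is left linked against 5.005_03.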
We have successfully deployed this upgrade on our SQL server,
which now runs perl 5.8.0. We believe it is quite possible
to upgrade our main systems as well.
Users may be asking, "Why upgrade at all?". The reason
for the upgrade is to provide not only a more functional and
more stable version of perl (which now includes threading),
but also to assist in deployment of a new anti-spam package
called SpamAssassin.
Presently we rely on MAPS RBL+ and Spamhaus to do our spam
detection for us, but these methods do not provide the end-user
with a way to determine what is spam and what is not.
SpamAssassin uses a complex scoring system to detect and mark incoming
Email to your account as spam, so that you can filter it out
(delete it) with ease, or at least know what is spam and WHY it is
considered spam. Messages marked as spam will contain a
very specific mail header and, for your convenience, will also
have a subject line starting with [SPAM].
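As an example of what you can then do with such markings: users who filter their mail with procmail
could file tagged messages into a separate mailbox with a recipe along these lines (this assumes
SpamAssassin's usual X-Spam-Flag header, and is only a sketch):

:0:
* ^X-Spam-Flag: YES
probably-spam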
We will not be deploying SpamAssassin on every account
by default, due to the implementation limitations of SpamAssassin
and paranoia on our part. We would like users who want this
feature to request that it be enabled on their account.
Also, Email that is considered spam will NOT be automatically
deleted. This is to ensure that if the system erroneously
marks an Email you receive as spam, you still have a way to read
it, and can also provide it to our administrative staff so we can
find out why it was deemed spam and make appropriate
changes to your account to ensure it does not happen again in
the future.
Finally: we will not be removing our support for MAPS RBL+ or
Spamhaus. These two mechanisms will remain in place,
as they presently work quite well for blocking ISPs who do not
wish to properly administer customers of theirs who push out
bulk unsolicited Email.
We're not sure at this point when the downtime will occur for
this upgrade, but we will be sure to let you know via our
homepage at least 24 hours in advance.
If you are still curious about SpamAssassin and other anti-spam
techniques we use, feel free to mail our administrative staff
with inquiries, or you can drop me an Email personally as well.
Rest in peace, Johnny.
You will forever live on in the hearts of us all.
PS: the pic of the day
We will be doing maintenance tomorrow morning, due to massive changes
deployed in the FreeBSD ports collection, as well as in the latest -STABLE
code.
During this time, common services may be up and down depending upon what
we're doing. Services include our web server, FTP server, mail server
(both SMTP and POP3), and ssh daemon.
We will begin work at 00:00 PDT (29th), and will likely finish around
02:00 PDT.
If you have any questions about what we're doing or how this will affect
you, feel free to Email me and I'll do my best to provide you with a more
detailed explanation.
I've gone and done the impossible: rewritten the entire main web interface
to be XHTML 1.1 and CSS2 compliant. "Wow, and when you hear it, you'll
be like, wow, I don't believe!". Hopefully this will shut Inverse up.
;-)
Along with the change, I've also taken the liberty of removing all of our
old stagnant posts (sorry sl1me, don't pee in my Corn Flakes later, okay?)
in hopes that new ones will arise.
Some readers may be wondering about my post back in November regarding
server upgrades. The SuperMicro box I spoke of didn't exactly work out
(two main reasons: lack of ECC support and absolutely horrible internal
cable layouts which posed the risk of data loss).
Instead, I've decided to look into more "professional" solutions;
specifically, Xeon-based systems using the Intel E7501 chipset (VIA and AMD
have failed too much recently, and ServerWorks just got slammed with the
latest hardware flaw), with support for hot-swappable SCA SCSI-3 drives (probably
using RAID 1), BIOS-level serial console, 2GB ECC RAM, hardware monitoring
(still hard to come by under FreeBSD), and a bunch of other fun features.
We'll also be moving away from SMP and going with single CPUs (the benefits
of SMP just aren't worth the cost; and with Intel's HyperThreading, I think
we'll get great (if not fantastic) performance out of a single CPU system,
while ensuring SMP doesn't cause any problems). Oh, and yes -- it'll be
expensive.
Parodius has been around since the early 90s. Our servers, time and time
again, have been upgraded and maintained with a consistent degree of
integrity and reliability. *knock on wood* Our current boxes have been
through some pretty tough times, especially ones involving extreme
environmental temperatures! The only pieces of hardware to fail have been
our IDE disks (our IBM SCSI disk is over 3 years old, with zero grown
defects), which should come as no surprise since IDE drives are so
cheaply made. Very depressing.
The rest of our infrastructure is still alive and kicking. Our Portmaster 2
still works great for serial console, our remote power cycler/rebooter
will soon be replaced with an APC unit which does SNMP and has a much
better administrative interface, and our HP switch will also be replaced
with a 24-port model. We've even gone as far as to do tri-weekly backups of
our filesystems to ensure that if there is a drive failure, we'll be able to
get back online with minimal data loss. I still recommend to end-users
that they keep local back-ups of their own data, though, just in case.
On the "Web" side of things, I'm still working on our monitoring system,
which is slowly coming together in my mind; the code has yet to be written,
and with a full-time job taking up most of my time, I can't really say when
everything will be implemented. Users should know that I do have plans to
make your virtual host logs available to you, with full graphs of bandwidth
usage and all, automatically on a daily basis -- numerous individuals have
requested this, and I want to Do It Right™ rather than just throw together
a quick hack. I've also taken a great interest in XML and SOAP; too bad
most of these technologies are extremely overrated, and serve very niche
roles in regards to what you can accomplish with them.
I do sincerely apologise for those users with virtual hosts that are being
impacted by bandwidth limitations; sorry guys, I just can't remove the cap
due to potential financial restrictions (re: bandwidth usage). If it's
really hurting your site (lots of timeouts), drop me an Email and I'll work
out a solution with you.
Again, a big thanks to
Anarchy Solutions
for offering us (commercial) co-location over at Hurricane Electric in
Fremont. Without them, we would have had to close shop...
As usual, I'll do my best to keep everyone informed as to any downtime and
changes we plan on making. In the meantime, here's to keeping the spirit
of Parodius alive. *holds up a pint of Boddingtons*
Seven months without a news update. Normally, I would be proud of this
supreme level of inactivity, as our generation's defining characteristic
is, of course, our profound apathy. But I just have to break the silence:
Happy 4th of July. Spend some time with the ones you love and reflect
on the good times when you weren't 9 to 5'ing it. And if current events
bring you down, consider the following:
Giving money and power to government is like giving whiskey and car keys
to teenage boys. - P.J. O'Rourke
Those of you who use FreeBSD may be witnessing an ongoing problem with the
cvsup mirrors regarding src/contrib/gcc/INSTALL constantly
getting deleted, or reporting that it cannot be deleted. In our case, we're
seeing similar problems but with src/contrib/gcc/INSTA -- yes,
the "LL" seems to be missing, for reasons unknown.
The author of CVSup, John Polstra,
made a post on the FreeBSD mailing lists
regarding this problem, and how to fix it.
I promptly Emailed John a series of questions pertaining to this situation. I
believe it to be one which has happened numerous times in the past
(jakarta-tomcat anyone?), and is still haunting BSD users to this day. I
was hoping to get some answers from him, and find out why things are the way
they are -- but more importantly, why they still haven't been
addressed.
If you're looking for a "quick fix" which takes up lots of bandwidth, simply
do the following as root:
# rm -fr /usr/src/*
# rm -fr /usr/sup/*
# cvsup -L2 -g -h cvsup9.freebsd.org /your/cvsup/config/file
The "key" is removing /usr/sup.