This is a general warning for anyone who owns or manages multiple internet domains.
If you have ever used a domain name and then switched to another, you should never let the original domain expire.
The reason: there are nefarious people out there who will buy the abandoned domain, put up content they scraped from an internet archive, and add links to malware or scam sites.
This happened to Ginny’s church’s web site (which I was hosting). The church had an older domain that had been abandoned in favor of a new one. Since the old domain wasn’t being used and all the content had been migrated over, we decided to save some money and stop paying for it.
About a year after that, the domain popped up on the internet as a local church. I was confused by this and looked at the site. It was an almost exact copy of the last version that had been published … except it had links that tried to redirect people to scam sites.
The only thing we could do was file a complaint with the domain registrar. I’m not sure if they ever took it down.
Now, if I have a domain that I had used, I keep paying for it and just set it to do a permanent redirect to the new domain.
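On an Apache server, for example, the old domain can be permanently redirected with a couple of lines of configuration (the domain name below is a placeholder):

```
# .htaccess (or VirtualHost) on the OLD domain — example domain only
# "permanent" sends a 301 status, so browsers and search engines
# learn that the site has moved for good
Redirect permanent / https://new-example.org/
```

A 301 also lets search engines transfer any ranking the old domain had to the new one.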
The only real exception is a domain I bought with the intention of using but never got around to. In those cases I just let it lapse, because there’s no content for anyone to mimic.
This is a general warning about using the CHGCMDDFT command to change the default value of command parameters for commands in QSYS.
There are a number of reasons not to change parameter defaults on commands in QSYS…
Any time you upgrade your systems, those default parameter changes will be lost because the commands are completely replaced.
Although you can identify which commands have had parameter defaults changed, there is no indication of WHAT parameter defaults were changed on a command. To identify which commands have had defaults changed, display the command object’s description (DSPOBJD) with DETAIL(*SERVICE). If an object has had its parameter defaults changed, the APAR ID will show ‘CHGDFT’.
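To check every command in QSYS at once, you can dump the service attributes to an output file and search it (the QTEMP file name here is just an example; in the DSPOBJD model file QADSPOBJ, the APAR ID column is ODAPAR):

```
/* Write the service attributes of every command in QSYS to an outfile */
DSPOBJD    OBJ(QSYS/*ALL) OBJTYPE(*CMD) DETAIL(*SERVICE) +
             OUTPUT(*OUTFILE) OUTFILE(QTEMP/CMDINFO)

/* Then look for rows where the APAR ID field (ODAPAR) is 'CHGDFT' */
```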
Third-party products may be expecting commands to have the IBM-provided default values. Because third-party products usually have to work across a number of IBM i versions, it’s impractical for vendors to explicitly specify a value for every parameter; new parameters are added with almost every release.
A Better Approach
A better approach would be to create a library to hold copies of the commands you want to modify …
1. Create a specific library to hold customized commands.
2. Add that library to the QSYSLIBL system value, above QSYS.
3. Duplicate the *CMD objects into that library.
4. Change the default parameter values on the commands in that library.
This way the commands in QSYS keep the IBM-provided default parameter values. Since the custom command library is above QSYS in the system library list, applications that reference those commands without qualifying them to QSYS will use the modified versions.
I like to create a simple CL program that deletes the existing commands from the custom command library, duplicates the commands from QSYS, and modifies the command parameter defaults. Not only does this make it easy to recreate the custom defaults after an OS upgrade, it also documents what default changes have been made.
You may be tempted to use the CRTPRXCMD command to create a ‘proxy command’ that points to the original command, and then change the defaults on the proxy.
DO NOT DO THAT!
A proxy command isn’t a stand-alone object independent of the actual command; it’s just a pointer to the real command.
Any real changes you make to a proxy command will actually be made on the real command.
Here’s a very simple example of a CL program that repopulates a custom command library with modified default parameter values.
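This sketch assumes a custom library named CUSTCMDS and uses the LOG parameter of CRTCLPGM as the example default; substitute your own library name, commands, and defaults:

```
PGM

/* Delete the old copy; ignore 'not found' errors on the first run */
DLTCMD     CMD(CUSTCMDS/CRTCLPGM)
MONMSG     MSGID(CPF0000)

/* Duplicate the IBM-supplied command into the custom library */
CRTDUPOBJ  OBJ(CRTCLPGM) FROMLIB(QSYS) OBJTYPE(*CMD) +
             TOLIB(CUSTCMDS)

/* Change the defaults on the copy — the command in QSYS is untouched */
CHGCMDDFT  CMD(CUSTCMDS/CRTCLPGM) NEWDFT('LOG(*YES)')

ENDPGM
```

Repeat the delete/duplicate/change block for each command you customize. Rerunning the program after an OS upgrade restores all of your defaults, and the source itself documents exactly what was changed.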
The folks at Amazon Lightsail have added a new, much needed, feature: Automatic snapshots.
Snapshots are a way of creating an exact backup of your Lightsail instance. You can use a snapshot to move the instance to another region, move it to the more flexible EC2 platform, or just create a new instance based on an existing one.
Previously, the only way to automate snapshots was to create AWS Lambda functions with CloudWatch triggers. I was able to get that set up, but it took quite a while.
Many of us who manage websites are familiar with Google’s ‘Search Console’. The Search Console is a way for webmasters to manage how Google interacts with our web sites. It provides functions to tell Google what parts of the site to search, what parts to ignore, and to see which pages are performing better than others.
One of the functions it provides is a way to see what parts of a web site Google has indexed and what parts it hasn’t. It can also tell what parts of a site it is ignoring and, to a certain extent, why it’s ignoring them.
One of the reasons that Google might be ignoring a page is because it has been determined to be a ‘Soft 404’.
What’s a Soft 404 error?
Well, a REAL 404 error is a page not found. It’s a function of the web server software. Most web servers provide the ability to use a custom page when a 404 error is encountered. You can see an example of one here.
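On Apache, for instance, a custom error page can be served while still returning the real 404 status to the browser (the page path below is just an example):

```
# Apache configuration: friendly error page, real 404 status code.
# The visitor sees /errors/not-found.html, but the response status
# is still 404, so search engines know the page genuinely doesn't exist.
ErrorDocument 404 /errors/not-found.html
```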
As for a ‘Soft 404’ … according to Google …
A soft 404 means that a URL on your site returns a page telling the user that the page does not exist and also a 200-level (success) code to the browser.
While some sites might actually do that … handle a page-not-found error with a friendly page while telling the browser it’s a normal page (200 status code) … I suspect it’s actually a minority of sites (granted, it may be a way to game the system).
However … it turns out that pages containing words like ‘not found’, ‘error’, ‘authorized’, or ‘not allowed’ in the title or body are often treated by Google as soft 404 errors … even when the page isn’t a 404 at all. Worse, Google sometimes flags pages where those words don’t appear at all. The details of what constitutes a ‘soft 404’ are very mysterious.