"... It tells me that the bottom line is that Christmas has become a harder season for White families. We are worse off because of BOTH social and economic liberalism which has only benefited an elite few. The bottom half of the White population is now in total disarray – drug addiction, demoralization, divorce, suicide, abortion, atomization, stagnant wages, declining household income and investments – and this dysfunction is creeping up the social ladder. The worst thing we can do is step on the accelerator. ..."
As we move into 2018, I am swinging away from the Republicans. I don't support the Paul Ryan
"Better Way" agenda. I don't support neoliberal economics. I think we have been going in the
wrong direction since the 1970s and don't want to continue going down this road.
Opioid Deaths: As we all know, the opioid epidemic has become a national crisis and the White working class
has been hit the hardest by it. It is a "sea of despair" out there.
White Mortality: As the family crumbles, religion recedes in his life, and his job prospects dwindle, the
middle-aged White working class man is turning to drugs, alcohol and suicide: The White suicide
rate has soared since 2000:
Median Household Income: The average household in the United States is poorer in 2017 than it was in 1997:
Real GDP: Since the late 1990s, real GDP and real median household income have parted
ways:
Productivity and Real Wages: Since the 1970s, the minimum wage has parted ways with
productivity gains in the US economy:
Stock Market: Since 2000, the stock market has soared, but 10% of Americans own 80% of
stocks. The top 1% owns 38% of stocks. In 2007, three-fourths of middle-class households were invested
in the stock market, but now only 50% are investors. Overall, 52% of Americans now own stocks,
which is down from 65%. The average American has less than $1,000 in their combined checking
and savings accounts.
Do you know what this tells me?
It tells me that the bottom line is that Christmas has become a harder season for White
families. We are worse off because of BOTH social and economic liberalism which has only
benefited an elite few. The bottom half of the White population is now in total disarray
– drug addiction, demoralization, divorce, suicide, abortion, atomization, stagnant
wages, declining household income and investments – and this dysfunction is creeping up
the social ladder. The worst thing we can do is step on the accelerator.
Paul Ryan and his fellow conservatives look at this and conclude we need MORE freedom. We
need lower taxes, more free trade, more deregulation, weaker unions, more immigration and less
social safety net spending. He wants to follow up tax reform with entitlement reform in 2018. I
can't help but see how this is going to make an already bad situation for the White working class
even worse.
I'm not rightwing in the sense that these people are. I think their policies are harmful to
the nation. I don't think they feel any sense of duty and obligation to the working class like
we do. They believe in liberal abstractions and make an Ayn Rand fetish out of freedom whereas
we feel a sense of solidarity with them grounded in race, ethnicity and culture which tempers
class division. We recoil at the evisceration of the social fabric whereas conservatives
celebrate this blind march toward plutocracy.
Do the wealthy need to own a greater share of the stock market? Do they need to own a
greater share of our national wealth? Do we need to loosen up morals and the labor market? Do
we need more White children growing up in financially stressed, broken homes on Christmas? Is
the greatest problem facing the nation spending on anti-poverty programs? Paul Ryan and the
True Cons think so.
Yeah, I don't think so. I also think it is a good thing right now that we aren't associated
with the mainstream Right. In the long run, I bet this will pay off for us. I predict this
platform they have been standing on for decades now, which they call the conservative base, is
going to implode on them. Donald Trump was only the first sign that Atlas is about to
shrug.
(Republished from Occidental Dissent by permission of author or representative)
At 5:30 every morning, Tony Gwiazdowski rolls out of bed, brews a pot of coffee and carefully arranges his laptop, cell phone
and notepad like silverware across the kitchen table.
And then he waits.
Gwiazdowski, 57, has been waiting for 16 months. Since losing his job as a transportation sales manager in February 2009, he wakes
each morning to the sobering reminder that, yes, he is still unemployed. So he pushes aside the fatigue, throws on some clothes and
sends out another flurry of resumes and cheery cover letters.
But most days go by without a single phone call. And around sundown, when he hears his neighbors returning home from work, Gwiazdowski
-- the former mayor of Hillsborough -- can't help but allow himself one tiny sigh of resignation.
"You sit there and you wonder, 'What am I doing wrong?'" said Gwiazdowski, who finds companionship in his 2-year-old golden retriever,
Charlie, until his wife returns from work.
"The worst moment is at the end of the day when it's 4:30 and you did everything you could, and the phone hasn't rung, the e-mails
haven't come through."
Gwiazdowski is one of a growing number of chronically unemployed workers in New Jersey and across the country who are struggling
to get through what is becoming one long, jobless nightmare -- even as the rest of the economy has begun to show signs of recovery.
Nationwide, 46 percent of the unemployed -- 6.7 million Americans -- have been without work for at least half a year, by far the
highest percentage recorded since the U.S. Labor Department began tracking the data in 1948.
In New Jersey, nearly 40 percent of the 416,000 unemployed workers last year fit that profile, up from about 20 percent in previous
years, according to the department, which provides only annual breakdowns for individual states. Most of them were unemployed for
more than a year.
But the repercussions of chronic unemployment go beyond the loss of a paycheck or the realization that one might never find the
same kind of job again. For many, the sinking feeling of joblessness -- with no end in sight -- can take a psychological toll, experts
say.
Across the state, mental health crisis units saw a 20 percent increase in demand last year as more residents reported suffering
from unemployment-related stress, according to the New Jersey Association of Mental Health Agencies.
"The longer the unemployment continues, the more impact it will have on their personal lives and mental health," said Shauna Moses,
the association's associate executive director. "There's stress in the marriage, with the kids, other family members, with friends."
And while a few continue to cling to optimism, even the toughest admit there are moments of despair: Fear of never finding work,
envy of employed friends and embarrassment at having to tell acquaintances that, nope, still no luck.
"When they say, 'Hi Mayor,' I don't tell a lot of people I'm out of work -- I say I'm semi-retired," said Gwiazdowski, who maxed
out on unemployment benefits several months ago.
"They might think, 'Gee, what's wrong with him? Why can't he get a job?' It's a long story and maybe people really don't care
and now they want to get away from you."
SECOND TIME AROUND
Lynn Kafalas has been there before, too. After losing her computer training job in 2000, the East Hanover resident took four agonizing
years to find new work -- by then, she had refashioned herself into a web designer.
That not-too-distant experience is why Kafalas, 52, who was laid off again eight months ago, grows uneasier with each passing
day. Already, some of her old demons have returned, like loneliness, self-doubt and, worst of all, insomnia. At night, her mind races
to dissect the latest interview: What went wrong? What else should she be doing? And why won't even Barnes & Noble hire her?
"It's like putting a stopper on my life -- I can't move on," said Kafalas, who has given up karate lessons, vacations and regular
outings with friends. "Everything is about the interviews."
And while most of her friends have been supportive, a few have hinted to her that she is doing something wrong, or not doing enough.
The remarks always hit Kafalas with a pang.
In a recent study, researchers at Rutgers University found that the chronically unemployed are prone to high levels of stress,
anxiety, depression, loneliness and even substance abuse, which take a toll on their self-esteem and personal relationships.
"They're the forgotten group," said Carl Van Horn, director of the John J. Heldrich Center for Workforce Development at Rutgers,
and a co-author of the report. "And the longer you are unemployed, the less likely you are to get a job."
Of the 900 unemployed workers first interviewed last August for the study, only one in 10 landed full-time work by March of this
year, and only half of those lucky few expressed satisfaction with their new jobs. Another one in 10 simply gave up searching.
Among those who were still unemployed, many struggled to make ends meet by borrowing from friends or family, turning to government
food stamps and forgoing health care, according to the study.
More than half said they avoided all social contact, while slightly less than half said they had lost touch with close friends.
Six in 10 said they had problems sleeping.
Kafalas says she deals with her chronic insomnia by hitting the gym for two hours almost every evening, lifting weights and pounding
the treadmill until she feels tired enough to fall asleep.
"Sometimes I forget what day it is. Is it Tuesday? And then I'll think of what TV show ran the night before," she said. "Waiting
is the toughest part."
AGE A FACTOR
Generally, the likelihood of long-term unemployment increases with age, experts say. A report by the National Employment Law Project
this month found that nearly half of those who were unemployed for six months or longer were at least 45 years old. Those between
16 and 24 made up just 14 percent.
Tell that to Adam Blank, 24, who has been living with his girlfriend and her parents at their Martinsville home since losing his
sales job at Best Buy a year and a half ago.
Blank, who graduated from Rutgers with a major in communications, says he feels like a burden sometimes, especially since his
girlfriend, Tracy Rosen, 24, works full-time at a local nonprofit. He shows her family gratitude with small chores, like taking out
the garbage, washing dishes, sweeping floors and doing laundry.
Still, he often feels inadequate.
"All I'm doing on an almost daily basis is sitting around the house trying to keep myself from going stir-crazy," said Blank,
who dreams of starting a social media company.
When he is feeling particularly low, Blank said he turns to a tactic employed by prisoners of war in Vietnam: "They used to build
dream houses in their head to help keep their sanity. It's really just imagining a place I can call my own."
LESSONS LEARNED
Meanwhile, Gwiazdowski, ever the optimist, says unemployment has taught him a few things.
He has learned, for example, how to quickly assess an interviewer's age and play up or down his work experience accordingly --
he doesn't want to appear "threatening" to a potential employer who is younger. He has learned that by occasionally deleting and
reuploading his resume to job sites, his entry appears fresh.
"It's almost like a game," he said, laughing. "You are desperate, but you can't show it."
But there are days when he just can't find any humor in his predicament -- like when he finishes a great interview but receives
no offer, or when he hears a fellow job seeker finally found work and feels a slight twinge of jealousy.
"That's what I'm missing -- putting on that shirt and tie in the morning and going to work," he said.
The memory of getting dressed for work is still so vivid, Gwiazdowski says, that he has to believe another job is just around
the corner.
"You always have to hope that that morning when you get up, it's going to be the day," he said.
"Today is going to be the day that something is going to happen."
I collect from the state of Iowa, was on Tier I, and when the gov't recessed without passing the extension, Iowa stopped paying
Tier I claims that were already open. I was scheduled to be on Tier I until July 15th, and it's gone now, as a surprise; when I
tried to claim my week this week I was notified. SURPRISE, talk about stress.
This is terrible....just wait until RIF'd teachers hit the unemployment offices....but then, this is what NJ wanted...fired
teachers who are to blame for the worst recession our country has seen in 150 years...thanks GWB.....thanks Donald Rumsfeld......thanks
Dick Cheney....thanks Karl "Miss Piggy" Rove...and thank you Mr. Big Boy himself...Gov Krispy Kreame!
For readers who care about this nation's unemployed: Call your Senators to pass HR 4213, the "Extenders" bill. Unfortunately,
it does not add weeks of UI benefits; however, it DOES continue the emergency federal tiers of UI. If it does not pass this week, many
of us are cut off at 26 wks. No Tier 1, 2 -- nothing.
The longer you are unemployed, the more you are affected by those factors.
Notable quotes:
"... The good news is that only a relatively small number of people are seriously affected by the stress of unemployment to the extent they need medical assistance. Most people don't get to the serious levels of stress, and much as they loathe being unemployed, they suffer few, and minor, ill effects. ..."
"... Worries about income, domestic problems, whatever, the list is as long as humanity. The result of stress is a strain on the nervous system, and these create the physical effects of the situation over time. The chemistry of stress is complex, but it can be rough on the hormonal system. ..."
"... Not at all surprisingly, people under stress experience strong emotions. It's a perfectly natural response to what can be quite intolerable emotional strains. It's fair to say that even normal situations are felt much more severely by people already under stress. Things that wouldn't normally even be issues become problems, and problems become serious problems. Relationships can suffer badly in these circumstances, and that, inevitably, produces further crises. Unfortunately for those affected, these are by now, at this stage, real crises. ..."
"... Some people are stubborn enough and tough enough mentally to control their emotions ruthlessly, and they do better under these conditions. Even that comes at a cost, and although under control, the stress remains a problem. ..."
"... One of the reasons anger management is now a growth industry is because of the growing need for assistance with severe stress over the last decade. This is a common situation, and help is available. ..."
"... Depression is universally hated by anyone who's ever had it. ..."
"... Very important: Do not, under any circumstances, try to use drugs or alcohol as a quick fix. They make it worse, over time, because they actually add stress. Some drugs can make things a lot worse, instantly, too, particularly the modern made-in-a-bathtub variety. They'll also destroy your liver, which doesn't help much, either. ..."
"... You don't have to live in a gym to get enough exercise for basic fitness. A few laps of the pool, a good walk, some basic aerobic exercises, you're talking about 30-45 minutes a day. It's not hard. ..."
It's almost impossible to describe the various psychological impacts, because there are so many. There are sometimes serious consequences,
including suicide, and, some would say worse, chronic depression.
There's not really a single cause and effect. It's a compound effect, and unemployment, by adding stress, affects people, often
badly.
The world doesn't need any more untrained psychologists, and we're not pretending to give medical advice. That's for professionals.
Everybody is different, and their problems are different. What we can do is give you an outline of the common problems, and what
you can do about them.
The good news is that only a relatively small number of people are seriously affected by the stress of unemployment to the extent
they need medical assistance. Most people don't get to the serious levels of stress, and much as they loathe being unemployed, they
suffer few, and minor, ill effects.
For others, there are a series of issues, and the big three are:
Stress
Anger, and other negative emotions
Depression
Stress
Stress is Stage One. It's a natural result of the situation. Worries about income, domestic problems, whatever, the list is as
long as humanity. The result of stress is a strain on the nervous system, and these create the physical effects of the situation
over time. The chemistry of stress is complex, but it can be rough on the hormonal system.
Over an extended period, the body's natural hormonal balances are affected, and this can lead to problems. These are actually
physical issues, but the effects are mental, and the first obvious effects are, naturally, emotional.
Anger, and other negative emotions
Not at all surprisingly, people under stress experience strong emotions. It's a perfectly natural response to what can be quite
intolerable emotional strains. It's fair to say that even normal situations are felt much more severely by people already under stress.
Things that wouldn't normally even be issues become problems, and problems become serious problems. Relationships can suffer badly in these circumstances, and that, inevitably, produces further crises. Unfortunately for those
affected, these are by now, at this stage, real crises.
If the actual situation was already bad, this mental state makes it a lot worse. Constant aggravation doesn't help people to keep
a sense of perspective. Clear thinking isn't easy when under constant stress.
Some people are stubborn enough and tough enough mentally to control their emotions ruthlessly, and they do better under these
conditions. Even that comes at a cost, and although under control, the stress remains a problem.
One of the reasons anger management is now a growth industry is because of the growing need for assistance with severe stress
over the last decade. This is a common situation, and help is available.
If you have reservations about seeking help, bear in mind it can't possibly be any worse than the problem.
Depression
Depression is universally hated by anyone who's ever had it. This is the next stage, and it's caused by hormonal imbalances which
affect serotonin. It's actually a physical problem, but it has mental effects which are sometimes devastating, and potentially life
threatening.
The common symptoms are:
Difficulty in focusing mentally, thoughts all over the place in no logical order
Fits of crying for no known reason
Illogical, or irrational patterns of thought and behavior
Sadness
Suicidal thinking
It's a disgusting experience. No level of obscenity could possibly describe it. Depression is misery on a level people wouldn't
conceive in a nightmare. At this stage the patient needs help, and getting it is actually relatively easy. It's convincing the person they need to do something about it that's difficult. Again, the mental state is working against the person. Even admitting there's a problem is hard for many people in this condition.
Generally speaking, a person who is trusted is the best person to tell anyone experiencing the onset of depression to seek help. Important: If you're experiencing any of those symptoms:
Get on the phone and make an appointment to see your doctor. It takes half an hour for a diagnosis, and you can be on your
way home with a cure in an hour. You don't have to suffer. The sooner you start to get yourself out of depression, the better.
Avoid any antidepressants with the so-called withdrawal side effects. They're not too popular with patients, and are under
some scrutiny. The normal antidepressants work well enough for most people.
Very important: Do not, under any circumstances, try to use drugs or alcohol as a quick fix. They make it worse, over time, because they actually add stress. Some drugs can make things a lot worse, instantly, too, particularly
the modern made-in-a-bathtub variety. They'll also destroy your liver, which doesn't help much, either.
Alcohol, in particular, makes depression much worse. Alcohol is a depressant, itself, and it's also a nasty chemical mix with
all those stress hormones.
If you've ever had alcohol problems, or seen someone with alcohol wrecking their lives, depression makes things about a million
times worse.
Just don't do it. Steer clear of any so-called stimulants, because they don't mix with antidepressants, either.
Unemployment and staying healthy
The above is what you need to know about the risks of unemployment to your health and mental well being.
These situations are avoidable.
Your best defense against the mental stresses and strains of unemployment, and their related problems is staying healthy.
We can promise you that is nothing less than the truth. The healthier you are, the better your defenses against stress, and the
more strength you have to cope with situations.
Basic health is actually pretty easy to achieve:
Diet
Eat real food, not junk, and make sure you're getting enough food. Your body can't work with resources it doesn't have. Good food
is a real asset, and you'll find you don't get tired as easily. You need the energy reserves.
Give yourself a good selection of food that you like, that's also worth eating.
The good news is that plain food is also reasonably cheap, and you can eat as much as you need. Basic meals are easy enough to
prepare, and as long as you're getting all the protein, vegetables, and minerals you need, you're pretty much covered.
You can also use a multivitamin cap, or broad spectrum supplements, to make sure you're getting all your trace elements. Also
make sure you're getting the benefits of your food by taking acidophilus or eating yogurt regularly.
Exercise
You don't have to live in a gym to get enough exercise for basic fitness. A few laps of the pool, a good walk, some basic aerobic
exercises, you're talking about 30-45 minutes a day. It's not hard.
Don't just sit and suffer
If anything's wrong, check it out when it starts, not six months later. Most medical conditions become serious when they're allowed
to get worse.
For unemployed people there is an added risk: an untreated condition may prevent you from getting that job, or from going to
interviews. If something's causing you problems, get rid of it.
Nobody who's been through the blender of unemployment thinks it's fun.
Anyone who's really done it tough will tell you one thing:
Don't be a victim. Beat the problem, and you'll really appreciate the feeling.
"... According to Amazon's metrics, I was one of their most productive order pickers -- I was a machine, and my pace would accelerate throughout the course of a shift. What they didn't know was that I stayed fast because if I slowed down for even a minute, I'd collapse from boredom and exhaustion ..."
"... toiling in some remote corner of the warehouse, alone for 10 hours, with my every move being monitored by management on a computer screen. ..."
"... ISS could simply deactivate a worker's badge and they would suddenly be out of work. They treated us like beggars because we needed their jobs. Even worse, more than two years later, all I see is: Jeff Bezos is hiring. ..."
"... I have never felt more alone than when I was working there. I worked in isolation and lived under constant surveillance ..."
"... That was 2012 and Amazon's labor and business practices were only beginning to fall under scrutiny. ..."
"... I received $200 a week for the following six months and I haven't had any source of regular income since those benefits lapsed. I sold everything in my apartment and left Pennsylvania as fast as I could. I didn't know how to ask for help. I didn't even know that I qualified for food stamps. ..."
Nichole Gracely has a master's degree and was one of Amazon's best order pickers. Now, after
protesting the company, she's homeless.
I am homeless. My worst days now are better than my best days working at Amazon.
According to Amazon's metrics, I was one of their most productive order pickers -- I was a machine,
and my pace would accelerate throughout the course of a shift. What they didn't know was that
I stayed fast because if I slowed down for even a minute, I'd collapse from boredom and exhaustion.
During peak season, I trained incoming temps regularly. When that was over, I'd be an ordinary
order picker once again, toiling in some remote corner of the warehouse, alone for 10 hours,
with my every move being monitored by management on a computer screen.
Superb performance did not guarantee job security. ISS is the temp agency that provides warehouse
labor for Amazon and they are at the center of the SCOTUS case Integrity Staffing Solutions
vs. Busk. ISS could simply deactivate a worker's badge and they would suddenly be out of work.
They treated us like beggars because we needed their jobs. Even worse, more than two years later,
all I see is: Jeff Bezos is hiring.
I have never felt more alone than when I was working there. I worked in isolation and lived
under constant surveillance. Amazon could mandate overtime and I would have to comply with any
schedule change they deemed necessary, and if there was not any work, they would send us home
early without pay. I started to fall behind on my bills.
At some point, I lost all fear. I had already been through hell. I protested Amazon. The
gag order was lifted and I was free to speak. I spent my last days in a lovely apartment constructing
arguments on discussion boards, writing articles and talking to reporters. That was 2012 and
Amazon's labor and business practices were only beginning to fall under scrutiny. I walked away
from Amazon's warehouse and didn't have any other source of income lined up.
I cashed in on my excellent credit, took out cards, and used them to pay rent and buy food
because it would be six months before I could receive my first unemployment compensation check.
I received $200 a week for the following six months and I haven't had any source of regular
income since those benefits lapsed. I sold everything in my apartment and left Pennsylvania
as fast as I could. I didn't know how to ask for help. I didn't even know that I qualified for
food stamps.
I furthered my Amazon protest while homeless in Seattle. When the Hachette dispute flared
up I "flew a sign," street parlance for panhandling with a piece of cardboard: "I was an order
picker at amazon.com. Earned degrees. Been published. Now,
I'm homeless, writing and doing this. Anything helps."
I have made more money per word with my signs than I will probably ever earn writing, and
I make more money per hour than I will probably ever be paid for my work. People give me money
and offer well wishes and I walk away with a restored faith in humanity.
I flew my protest sign outside Whole Foods while Amazon corporate employees were on lunch
break, and they gawked. I went to my usual flying spots around Seattle and made more money per
hour protesting Amazon with my sign than I did while I worked with them. And that was in Seattle.
One woman asked, "What are you writing?" I told her about the descent from working poor to homeless,
income inequality, my personal experience. She mentioned Thomas Piketty's book, we chatted a
little, she handed me $10 and wished me luck. Another guy said, "Damn, that's a great story!
I'd read it," and handed me a few bucks.
While lazy people do exist, this compulsive quest for "high performance" is one of the most disgusting features of
neoliberalism, cemented by annual "performance reviews," which are a scam.
An overwhelming majority
of bosses and employees think that some of their colleagues consistently underperform.
An Investors in People survey found 75% of bosses and 80% of staff thought some colleagues
were "dead wood" - and the main reason was thought to be laziness. Nearly half of employees added they worked closely with someone who they thought was lazy
and not up to the job. However, four out of ten workers said that their managers did nothing about colleagues not
pulling their weight.
According to Investors in People, the problem of employees not doing their jobs properly
seemed to be more prevalent in larger organizations. The survey found that 84% of workers in organizations with more than 1,000 employees thought
they had an underperforming colleague, compared with 50% in firms with fewer than 50 staff.
Tell-tale signs
The survey identified the tell-tale signs of people not pulling their weight, according to
both employers and employees, including:
Prioritizing personal life over work
Refusing extra responsibility
Passing off colleagues' work as their own
Both employers and employees agreed that the major reason for someone failing in their job
was sheer laziness. "Dead wood" employees can have a stark effect on their colleagues' physical and mental
well-being, the survey found. Employees reported that they had to work longer hours to cover for shirking colleagues and
felt undervalued as a result. Ultimately, working alongside a lazy colleague could prompt workers to look for a new job,
the survey found.
But according to Nick Parfitt, spokesman for human resources firm Cubiks, an unproductive
worker isn't necessarily lazy.
"It can be too easy to brand a colleague lazy," he said. "They may have genuine personal problems or are being asked to do a job that they have not
been given the training to do. "The employer must look out for the warning signs of a worker becoming de-motivated - hold
regular conversations and appraisals with staff."
However, Mr Parfitt added that ultimately lazy employees may have to be shown the door. "The cost of sacking someone can be colossal and damaging to team morale but sometimes it
may be the only choice."
"... Total 2015 gross passenger payments were 200% higher than 2014, but Uber corporate revenue improved 300% because Uber cut the driver share of passenger revenue from 83% to 77%. This was an effective $500 million wealth transfer from drivers to Uber's investors. ..."
"... Uber's P&L gains were wiped out by higher non-EBIDTAR expense. Thus the 300% Uber revenue growth did not result in any improvement in Uber profit margins. ..."
"... In 2016, Uber unilaterally imposed much larger cuts in driver compensation, costing drivers an additional $3 billion. [6] Prior to Uber's market entry, the take home pay of big-city cab drivers in the US was in the $12-17/hour range, and these earnings were possible only if drivers worked 65-75 hours a week. ..."
"... An independent study of the net earnings of Uber drivers (after accounting for the costs of the vehicles they had to provide) in Denver, Houston and Detroit in late 2015 (prior to Uber's big 2016 cuts) found that driver earnings had fallen to the $10-13/hour range. [7] Multiple recent news reports have documented how Uber drivers are increasing unable to support themselves from their reduced share of passenger payments. [8] ..."
"... Since mass driver defections would cause passenger volume growth to collapse completely, Uber was forced to reverse these cuts in 2017 and increased the driver share from 68% to 80%. This meant that Uber's corporate revenue, which had grown over 300% in 2015 and over 200% in 2016 will probably only grow by about 15% in 2017. ..."
"... Socialize the losses, privatize the gains, VC-ize the subsidies. ..."
"... The cold hard truth is that Uber is backed into a corner with severely limited abilities to tweak the numbers on either the supply or the demand side: cut driver compensation and they trigger driver churn (as has already been demonstrated), increase fare prices for riders and riders defect to cheaper alternatives. ..."
"... "Growth and Efficiency" are the sine qua non of Neoliberalism. Kalanick's "hype brilliance" was to con the market with "revenue growth" and signs ..."
Uber lost $2.5 billion in 2015, probably lost $4 billion in 2016, and is on track to lose $5
billion in 2017.
The top line on the table below shows total passenger payments, which must be split
between Uber corporate and its drivers. Driver gross earnings are substantially higher than
actual take home pay, as gross earning must cover all the expenses drivers bear, including
fuel, vehicle ownership, insurance and maintenance.
Most of the "profit" data released by Uber over time and discussed in the press is not true
GAAP (generally accepted accounting principles) profit comparable to the net income numbers
public companies publish but is EBIDTAR contribution. Companies have significant leeway as to
how they calculate EBIDTAR (although it would exclude interest, taxes, depreciation,
amortization) and the percentage of total costs excluded from EBIDTAR can vary significantly
from quarter to quarter, given the impact of one-time expenses such as legal settlements and
stock compensation. We only have true GAAP net profit results for 2014, 2015 and the 2nd/3rd
quarters of 2017, but have EBIDTAR contribution numbers for all other periods.
[5]
Uber had GAAP net income of negative $2.6 billion in 2015, and a negative profit margin of
132%. This is consistent with the negative $2.0 billion loss and (143%) margin for the year
ending September 2015 presented in part one of the NC Uber series over a year ago.
No GAAP profit results for 2016 have been disclosed, but actual losses likely exceed $4
billion given the EBIDTAR contribution of negative $3.2 billion. Uber's GAAP losses for the 2nd
and 3rd quarters of 2017 were over $2.5 billion, suggesting annual losses of roughly $5
billion.
While many Silicon Valley funded startups suffered large initial losses, none of them lost
anything remotely close to $2.6 billion in their sixth year of operation and then doubled their
losses to $5 billion in year eight. Reversing losses of this magnitude would require the
greatest corporate financial turnaround in history.
No evidence of significant efficiency/scale gains; 2015 and 2016 margin improvements
entirely explained by unilateral cuts in driver compensation, but losses soared when Uber had
to reverse these cuts in 2017.
Total 2015 gross passenger payments were 200% higher than 2014, but Uber corporate
revenue improved 300% because Uber cut the driver share of passenger revenue from 83% to 77%.
This was an effective $500 million wealth transfer from drivers to Uber's investors. These
driver compensation cuts improved Uber's EBIDTAR margin, but Uber's P&L gains were
wiped out by higher non-EBIDTAR expense. Thus the 300% Uber revenue growth did not result in
any improvement in Uber profit margins.
In 2016, Uber unilaterally imposed much larger cuts in driver compensation, costing
drivers an additional $3 billion.
[6] Prior to Uber's market entry, the take home pay of big-city cab drivers in the US was
in the $12-17/hour range, and these earnings were possible only if drivers worked 65-75 hours a
week.
An independent study of the net earnings of Uber drivers (after accounting for the costs
of the vehicles they had to provide) in Denver, Houston and Detroit in late 2015 (prior to
Uber's big 2016 cuts) found that driver earnings had fallen to the $10-13/hour range.
[7] Multiple recent news reports have documented how Uber drivers are increasingly unable to
support themselves from their reduced share of passenger payments.
[8]
A business model where profit improvement is hugely dependent on wage cuts is unsustainable,
especially when take home wages fall to (or below) minimum wage levels. Uber's primary focus
has always been the rate of growth in gross passenger revenue, as this has been a major
justification for its $68 billion valuation. This growth rate came under enormous pressure in
2017 given Uber's efforts to raise fares, major increases in driver turnover as wages fell,
[9] and the avalanche of adverse publicity it was facing.
Since mass driver defections would cause passenger volume growth to collapse completely,
Uber was forced to reverse these cuts in 2017 and increased the driver share from 68% to 80%.
This meant that Uber's corporate revenue, which had grown over 300% in 2015 and over 200% in
2016 will probably only grow by about 15% in 2017.
"Uber's business model can never produce sustainable profits"
Two words not in my vocabulary are "Never" and "Always", that is a pretty absolute
statement in an non-absolute environment. The same environment that has produced the "Silicon
Valley Growth Model", with 15x earnings companies like NVIDA, FB and Tesla (Average
earnings/stock price ratio in dot com bubble was 10x) will people pay ridiculous amounts of
money for a company with no underlying fundamentals you damn right they will! Please stop
with the I know all no body knows anything, especially the psychology and irrationality of
markets which are made up of irrational people/investors/traders.
My thoughts exactly. Seems the only possible recovery for the investors is a perfectly
engineered legendary pump and dump IPO scheme. Risky, but there's a lot of fools out there
and many who would also like to get on board early in the ride in fear of missing out on all
the money to be hoovered up from the greater fools. Count me out.
The author clearly distinguishes between GAAP profitability and valuations, which is after
all rather the point of the series. And he makes a more nuanced point than the half sentence
you have quoted without context or any indication that you omitted a portion. Did you
miss the part about how Uber would have a strong incentive to share the evidence of a network
effect or other financial story that pointed the way to eventual profit? Otherwise (my words)
it is the classic sell at a loss, make it up with volume path to liquidation.
apples and oranges comparison, nvidia has lots and lots of patented tech that produces
revenue, facebook has a kajillion admittedly irrational users, but those users drive massive
ad sales (as just one example of how that company capitalizes itself) and tesla makes an
actual car, using technology that inspires its buyers (the put your money where your mouth
is crowd and it can't be denied that tesla, whatever its faults are, battery tech is not one
of them and that intellectual property is worth a lot, and tesla's investors are in on that
real business, profitable or otherwise)
Uber is an iPhone app. They lose money and have no
path to profitability (unless it's the theory you espouse that people are unintelligent, so
even unintelligent ideas work to fleece them). This article touches on one of the great
things about the time we now inhabit: Uber drivers could bail en masse. There are two sides
to the low-attachment employees you can get rid of easily. The drivers can delete the
Uber app as soon as another iPhone app comes along that gets them a better return
For many air travelers, getting to and from the airport has long been part of the whole
miserable experience. Do they drive and park in some distant lot? Take mass transit or a
taxi? Deal with a rental car?
Ride-hailing services like Uber and Lyft are quickly changing those calculations. That
has meant a bit less angst for travelers.
But that's not the case for airports. Travelers' changing habits, in fact, have begun to
shake the airports' financial underpinnings. The money they currently collect from
ride-hailing services does not compensate for the lower revenues from the other sources.
At the same time, some airports have had to add staff to oversee the operations of the
ride-hailing companies, the report said. And with more ride-hailing vehicles on the roads
outside terminals,
there's more congestion.
Socialize the losses, privatize the gains, VC-ize the subsidies.
The cold hard truth is that Uber is backed into a corner with severely limited abilities
to tweak the numbers on either the supply or the demand side: cut driver compensation and
they trigger driver churn (as has already been demonstrated), increase fare prices for riders
and riders defect to cheaper alternatives. The only question is how long can they keep the
show going before the lights go out, slick marketing and propaganda can only take you so far,
and one assumes the dumb money has a finite supply of patience and will at some point begin
asking the tough questions.
The irony is that Uber would have been a perfectly fine, very profitable mid-sized company
if Uber had stuck with its initial model -- sticking to dense cities with limited parking,
limiting driver supply, and charging a premium price for door-to-door delivery, whether by
livery or a regular sedan. And then perhaps branching into robo-cars.
But somehow Uber/board/Travis got suckered into the siren call of self-driving cars,
triple-digit user growth, and being in the top 100 US cities and on every continent.
I've shared a similar sentiment in one of the previous posts about Uber. But operating
profitably in a decent-sized niche doesn't fit well with ambitions of global domination. For
Uber to be "right-sized", an admission of folly would have to be made, its managers and
investors would have to transcend the sunk cost fallacy in their strategic decision making,
and said investors would have to accept massive hits on their invested capital. The cold,
hard reality of being blindsided and kicked to the curb in the smartphone business forced
RIM/Blackberry to right-size, and they may yet have a profitable future as an enterprise
facing software and services company. Uber would benefit from that form of sober mindedness,
but I wouldn't hold my breath.
I know nothing about Softbank or its management, but I do know that the Japanese were the
dumb money rubes in the late '80's, overpaying for trophy real estate they lost billions
on.
Until informed otherwise, that's my default assumption
Softbank possibly looking to buy more Uber shares at a 30% discount is very odd. Uber had
a Series G funding round in June 2016 where a $3.5
billion investment from Saudi Arabia's Public Investment Fund resulted in its current $68
billion valuation. Now apparently Softbank wants to lead a new $6 billion funding round to
buy the shares of Uber employees and early investors at a 30% discount from this last
"valuation". It's odd because Saudi Arabia's Public Investment Fund has pledged
$45 billion to SoftBank's Vision Fund, an amount which was supposed to come from the
proceeds of its pending Aramco IPO. If the Uber bid is linked to SoftBank's Vision Fund, or
KSA money, then it's not clear why this investor might be looking to literally 'double down'
from $3.5 billion to $6 billion on a declining investment.
"Growth and Efficiency" are the sine qua non of Neoliberalism. Kalanick's "hype
brilliance" was to con the market with "revenue growth" and signs of efficiency, and
hopes of greater efficiency, and make most people just overlook the essential fact
that Uber is the most unprofitable company of all time!
What comprises "Uber Expenses"? 2014 – $1.06 billion; 2015 $3.33 billion; 2016 $9.65
billion; forecast 2017 $11.418 billion!!!!!! To me this is the big question – what are
they spending $10 billion per year on?
Also – why did the driver share go from 68% in
2016, 2017 Uber revenue is $11.808 billion, which means a bit better than break-even EBITDA,
assuming Uber expenses are as stated $11.428 billion.
Perhaps not so bleak as the article presents, although I would not invest in this
thing.
I have the same question: What comprises over 11 billion dollars in expenses in 2017?
Could it be they are paying out dividends to the early investors? Which would mean they are
cannibalizing their own company for the sake of the VC! How long can this go on before
they'll need a new infusion of cash?
Oh, the article does answer your 2nd question. Read this paragraph:
Since mass driver defections would cause passenger volume growth to collapse completely
, Uber was forced to reverse these cuts in 2017 and increased the driver share from 68% to
80%. This meant that Uber's corporate revenue, which had grown over 300% in 2015 and over
200% in 2016 will probably only grow by about 15% in 2017.
As for the 1st, read this line in the article:
There are undoubtedly a number of things Uber could do to reduce losses at the margin,
but it is difficult to imagine it could suddenly find the $4-5 billion in profit
improvement needed merely to reach breakeven.
in addition to all the points listed in the article/comments, the absolute biggest flaw
with Uber is that Uber HQ conditioned its customers on (a) cheap fares and (b) that a car is
available within minutes (1-5 if in a big city).
Those two are not mutually compatible in the long-term.
Thus (a) "We cost less" and (b) "We're more convenient" -- aren't those also the
advantages that Walmart claims and feeds as a steady diet to its ever hungry consumers? Often
if not always, disruption may repose upon delusion.
When this Uber madness blows up, I wonder if people will finally begin to discuss the
brutal reality of Silicon Valley's so called "disruption".
It is heavily built around the idea of economic exploitation. Uber drivers are often making
not very much per hour driven, especially when the true costs to operate an Uber, including
vehicle depreciation, are factored in, and particularly if they don't get the surge money.
Instacart is another example. They are paying the delivery operators very little.
At a fundamental level, I think that the Silicon Valley "disruption" model only works for
markets (like software) where the marginal cost of production is de minimis and the
products can be protected by IP laws. Volume and market power really work in those cases. But
out here in meat-space, where actual material and labor are big inputs to each item sold, you
can never just sit back on your laurels and rake in the money. Somebody else will always be
able to come along and make an equivalent product. If they can do it more cheaply, you are in
trouble.
There aren't that many areas in goods and services where the marginal costs are very
low.
Software is actually quite unique in that regard, costing merely the bandwidth and
permanent storage space to store.
Let's see:
1. From the article, they cannot go public and have limited ways to raise more money. An
IPO with its more stringent disclosure requirements would expose them.
2. They tried lowering driver compensation and found that model unsustainable.
3. There are no benefits to expanding in terms of economies of scale.
From where I am standing, it looks like a lot of industries have similar barriers. Silicon
Valley is not going to be able to disrupt those.
Tesla, another Silicon Valley company, seems to be struggling to mass-produce its Model 3
and deliver an electric car that is reliable and breaks even, while disrupting the industry in
the ways that Elon Musk attempted to hype up.
So that basically leaves services and manufacturing out for Silicon Valley disruption.
UBER has become a "too big to fail" startup because of all the different tentacles of
capital from various Tier 1 VCs and investment bankers.
VCs have admitted openly that UBER is a subsidized business, meaning its product is sold
below market value, and the losses reflect that subsidization. The whole "two-sided platform"
argument is just marketecture to hustle more investors. It's a form of service "dumping" that
puts legacy businesses into bankruptcy. Back during the dotcom bubble one popular investment
banker (Paul Deninger) characterized this model as "Terrorist Competition", i.e. coffers full
of invested cash to commoditize the market and drive out competition.
UBER is an absolute disaster that has forked the startup model in Silicon Valley in order
to drive total dependence on venture capital by founders. And its current diversification
into "autonomous vehicles", food delivery, et al are simply more evidence that the company
will never be profitable due to its whacky "blitzscaling" approach of layering on new
"businesses" prior to achieving "fit" in its current one.
Its economic model has also metastasized into a form of startup cancer that is killing
Silicon Valley as a "technology" innovator. Now it's all cargo cult marketing BS tied to
"strategic capital".
UBER is the victory of venture capital and user subsidized startups over creativity by
real entrepreneurs.
Its shadow is long and that's why this company should be ..wait for it UNBUNDLED (the new
silicon valley word attached to that other BS religion called "disruption"). Call it a great
unbundling and you can break up this monster corp any way you want.
2. The elevator pitch for Uber: subsidize rides to attract customers, put the competition
out of business, and then enjoy an unregulated monopoly, all while exploiting economically
ignorant drivers–ahem–"partners."
3. But more than one can play that game, and
4. Cab and livery companies are finding ways to survive!
If subsidizing rides is counted as an expense (not being an accountant, I would guess it
is), then whether the subsidy goes to the driver or the passenger, that would account for the
ballooning expenses, to answer my own question. Otherwise, the overhead for operating what
Uber describes as a tech company should be minimal: A billion should fund a decent
headquarters with staff, plus field offices in, say, 100 U.S. cities. However, their global
pretensions are probably burning cash like crazy. On top of that, I wonder what the exec
compensation is like?
After reading HH's initial series, I made a crude, back-of-the-envelope calculation that
Uber would run out of money sometime in the third fiscal quarter of 2018, but that was based
on assuming losses were stabilizing in the range of 3 billion a year. Not so, according to
the article. I think crunch time is rapidly approaching. If so, then SoftBank's tender offer
may look quite appetizing to VC firms and to any Uber employee able to cash in their options.
I think there is a way to make a re-envisioned Uber profitable, and with a more independent
board, they may be able to restructure the company to show a pathway to profitability before
the IPO. But time is running out.
A not insignificant question is the recruitment and retention of the front line
"partners." It would seem to me that at some point, Uber will run out of economically
ignorant drivers with good manners and nice cars. I would be very interested to know how many
drivers give up Uber and other ride-sharing gigs once the 1099's start flying at the
beginning of the year. One of the harsh realities of owning a business or being a contractor
is the humble fact that you get paid LAST!
We became instant Uber riders while spending holidays with relatives in San Diego. While
their model is indeed unique from a rider perspective, it was the driver pool that fascinated
me. These are not professional livery drivers, but rather freebooters of all stripes driving
for various reasons. The remuneration they receive cannot possibly generate much income after
expenses, never mind the problems associated with IRS filing as independent contractors.
One guy was just cruising listening to music; cooler to get paid for it than just sitting
home! A young lady was babbling and gesticulating non stop about nothing coherent and
appeared to be on some sort of stimulant. A foreign gentleman, very professional, drove for
extra money when not at his regular job. He was the only one who had actually bought a new
Prius for this gig, hoping to pay it off in two years.
This is indeed a brave new world. There was a period in Nicaragua just after the Contra
war ended when citizens emerged from their homes and hit the streets in large numbers,
desperately looking for income. Every car was a taxi and there was a bipedal mini Walmart at
every city intersection as individuals sold everything and anything in a sort of euphoric
optimism towards the future. Reality just hadn't caught up with them yet.
There is a flag --files-from that does exactly what you want. From man rsync:
--files-from=FILE
Using this option allows you to specify the exact list of files to transfer (as read
from the specified FILE or - for standard input). It also tweaks the default behavior of
rsync to make transferring just the specified files and directories easier:
The --relative (-R) option is implied, which preserves the path information that is
specified for each item in the file (use --no-relative or --no-R if you want to turn that
off).
The --dirs (-d) option is implied, which will create directories specified in the
list on the destination rather than noisily skipping them (use --no-dirs or --no-d if you
want to turn that off).
The --archive (-a) option's behavior does not imply --recursive (-r), so specify it
explicitly, if you want it.
These side-effects change the default state of rsync, so the position of the
--files-from option on the command-line has no bearing on how other options are parsed
(e.g. -a works the same before or after --files-from, as does --no-R and all other
options).
The filenames that are read from the FILE are all relative to the source dir -- any
leading slashes are removed and no ".." references are allowed to go higher than the source
dir. For example, take this command:
rsync -a --files-from=/tmp/foo /usr remote:/backup
If /tmp/foo contains the string "bin" (or even "/bin"), the /usr/bin directory will be
created as /backup/bin on the remote host. If it contains "bin/" (note the trailing slash),
the immediate contents of the directory would also be sent (without needing to be
explicitly mentioned in the file -- this began in version 2.6.4). In both cases, if the -r
option was enabled, that dir's entire hierarchy would also be transferred (keep in mind
that -r needs to be specified explicitly with --files-from, since it is not implied by -a).
Also note that the effect of the (enabled by default) --relative option is to duplicate
only the path info that is read from the file -- it does not force the duplication of the
source-spec path (/usr in this case).
In addition, the --files-from file can be read from the remote host instead of the local
host if you specify a "host:" in front of the file (the host must match one end of the
transfer). As a short-cut, you can specify just a prefix of ":" to mean "use the remote end
of the transfer". For example:
rsync -a --files-from=:/path/file-list src:/ /tmp/copy
This would copy all the files specified in the /path/file-list file that was located on
the remote "src" host.
If the --iconv and --protect-args options are specified and the --files-from filenames
are being sent from one host to another, the filenames will be translated from the sending
host's charset to the receiving host's charset.
NOTE: sorting the list of files in the --files-from input helps rsync to be more
efficient, as it will avoid re-visiting the path elements that are shared between adjacent
entries. If the input is not sorted, some path elements (implied directories) may end up
being scanned multiple times, and rsync will eventually unduplicate them after they get
turned into file-list elements.
Note that you still have to specify the directory where the files listed are located, for
instance: rsync -av --files-from=file-list . target/ for copying files from the
current dir. – Nicolas Mattia, Feb 11 '16 at 11:06
If the files-from file has anything starting with .. , rsync appears to ignore the
.. , giving me an error like rsync: link_stat
"/home/michael/test/subdir/test.txt" failed: No such file or directory (in this case
running from the "test" dir and trying to specify "../subdir/test.txt", which does
exist). – Michael, Nov 2 '16 at 0:09
The --files-from= parameter needs a trailing slash if you want to keep the absolute
path intact. So your command would become something like below:
rsync -av --files-from=/path/to/file / /tmp/
This is useful when there are a large number of files and you want to copy all of them to a
given path. You would find the files and write the output to a file, like below:
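For example, a minimal sketch of that workflow, with a hypothetical source tree and file pattern (sorting the list first follows the NOTE above about rsync efficiency):
find /var/log -type f -name '*.log' | sort > /tmp/file-list
rsync -av --files-from=/tmp/file-list / /var/tmp/log-backup/
Because find prints absolute paths, using / as the source directory (as in the comment above) lets rsync resolve them, and the implied --relative option recreates the full path under /var/tmp/log-backup/.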
But ultimately, the way to get user data boils down to the basic rules of usability:
Watch what people actually do.
Do not believe what people say they do.
Definitely don't believe what people predict they may do in the future.
... ... ...
So, do users know what they want? No, no, and no. Three times no.
Finally, you must consider how and when to solicit feedback. Although it might be tempting
to simply post a survey online, you're unlikely to get reliable input (if you get any at all).
Users who see the survey and fill it out before they've used the site will offer irrelevant
answers. Users who see the survey after they've used the site will most likely leave without
answering the questions. One question that does work well in a website survey is "Why are you
visiting our site today?" This question goes to users' motivation and they can answer it as
soon as they arrive.
I am a liberal arts person who wound up being a technology director. With the exception of
15 credit hours earned on my way to a Cisco Certified Network Associate credential, all of the
rest of my learning came on the job. I believe that learning what not to do from real
experiences is often the best teacher. However, those experiences can frequently come at the
cost of real emotional pain. Prior to my Cisco training, I had very little experience with
TCP/IP networking, or with the kinds of havoc I could create, however innocently, through my lack of
understanding of the nuances of routing and DHCP.
At the time our school network was an Active Directory domain with DHCP and DNS provided by
a Windows 2000 server. All of our staff access to email, the Internet, and network shares was
served this way. I had been researching the K12 Linux Terminal Server (K12LTSP) project and had built a Fedora
Core box with a single network card in it. I wanted to see how well my new project worked, so
without talking to my network support specialists I connected it to our main LAN segment. In a
very short period of time our help desk phones were ringing with principals, teachers, and
other staff who could no longer access their email, printers, shared directories, and more. I
had no idea that the Windows clients would see another DHCP server on our network (my test
computer) and pick up an IP address and DNS information from it.
I had unwittingly created a "rogue" DHCP server and was oblivious to the havoc that it would
create. I shared with the support specialist what had happened, and I can still see him making a
bee-line for that rogue computer and disconnecting it from the network. All of our client
computers had to be rebooted, along with many of our switches, which resulted in a lot of
confusion and lost time due to my ignorance. That's when I learned that it is best to test new
products on their own subnet.
"... What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. ..."
"... Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups. Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work. In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye. ..."
"... Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is. ..."
"... The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue. ..."
What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams,
server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything
worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these
people could save a business in times of disaster.
Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT
groups. Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does
work. In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and
know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free
ticket for people to turn a blind eye.
Specialization
You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with
a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in
learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is
run by people who specialize in those elements. Everything is taken care of.
Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows
administrators about storage multipathing; or worse logging in and setting it up because it's faster for the storage gurus to
do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about
new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and learn only one thing.
If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get.
Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they
indicate specialization or compensation for lack of experience.
Resource Competition
Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no."
As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups.
Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team
is.
The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's
worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may
get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management
or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate
the efforts of another group and just implement something themselves. On and on, the arguments continue.
Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what
happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting
in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT
groups.
With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most
groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers
to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to
various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their
own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction.
Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing
groups, a viable option.
Blamestorming
The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality.
Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time,
so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was resolved. Having someone
else to blame when things get delayed makes it all too easy to simply stop working for a while.
More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system
outage.
Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and
they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which
means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run
through nearly every team, but still have no answer. The SAN people don't have access to the application servers to help diagnose
the problem. The server team doesn't even know how the application runs.
See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate
themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care
of all the other pieces.
I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge
and also get to know other people, which should lessen the propensity to view them as outsiders.
The resentment against outsourcing had been brewing for a long time.
Notable quotes:
"... Much of the frustration focused on the IT layoffs at Southern California Edison , which is cutting 500 IT workers after hiring two offshore outsourcing firms. This has become the latest example for critics of the visa program's capacity for abuse. ..."
"... Infosys whistleblower Jay Palmer, who testified, and is familiar with the displacement process, told Sessions said these workers will get sued if they speak out. "That's the fear and intimidation that these people go through - they're blindsided," said Palmer. ..."
"... Moreover, if IT workers refuse to train their foreign replacement, "they are going to be terminated with cause, which means they won't even get their unemployment insurance," said Ron Hira, an associate professor at Howard University, who also testified. Affected tech workers who speak out publicly and use their names, "will be blackballed from the industry," he said. ..."
"... Hatch, who is leading the effort to increase the H-1B cap, suggested a willingness to raise wage levels for H-1B dependent employers. They are exempt from U.S. worker protection rules if the H-1B worker is paid at least $60,000 or has a master's degree, a figure that was set in law in 1998. Hatch suggested a wage level of $95,000. ..."
"... Sen. Dick Durbin, (Dem-Ill.), who has joined with Grassley on legislation to impose some restrictions on H-1B visa use -- particularly in offshoring -- has argued for a rule that would keep large firms from having more than 50% of their workers on the visa. This so-called 50/50 rule, as Durbin has noted, has drawn much criticism from India, where most of the affected companies are located. ..."
"... "I want to put the H-1B factories out of business," said Durbin. ..."
"... Hal Salzman, a Rutgers University professor who studies STEM (Science, Technology, Engineering and Math) workforce issues, told the committee that the IT industry now fills about two-thirds of its entry-level positions with guest workers. "At the same time, IT wages have stagnated for over a decade," he said. ..."
"... H-1B supporters use demand for the visa - which will exceed the 85,000 cap -- as proof of economic demand. But Salzman argues that U.S. colleges already graduate more scientists and engineers than find employment in those fields, about 200,000 more. ..."
A Senate Judiciary Committee hearing today on the H-1B visa offered up a stew of policy arguments, positioning and frustration.
Much of the frustration focused on the
IT layoffs at Southern California Edison, which is cutting 500 IT workers after hiring two offshore outsourcing firms. This has
become the latest example for critics of the visa program's capacity for abuse.
Sen. Charles Grassley (R-Iowa), the committee chair who has long sought H-1B reforms, said he invited Southern California Edison
officials "to join us today" and testify. "I thought they would want to defend their actions and explain why U.S. workers have been
left high and dry," said Grassley. "Unfortunately, they declined my invitation."
The hearing, by the people picked to testify, was
weighted toward
critics of the program, prompting a response by industry groups.
Compete America, the Consumer Electronics Association, FWD.us, the U.S. Chamber of Commerce and many others submitted a letter
to the committee to rebut the "flawed studies" and "non-representative anecdotes used to create myths that suggest immigration harms
America and American workers."
The claim that H-1B critics are using "anecdotes" to make their points (which include layoff reports at firms such as Edison)
is a naked example of the pot calling the kettle black. The industry musters anecdotal stories in support of its positions readily
and often. It makes available to the press and congressional committees people who came to the U.S. on an H-1B visa who started a
business or took on a critical role in a start-up. These people are free to share their often compelling and admirable stories.
The voices of the displaced, who may be in fear of losing their homes, are thwarted by severance agreements.
The committee did hear from displaced workers, including some at Southern California Edison. But the communications with these
workers are being kept confidential.
"I got the letters here from people, without the names," said Sen. Jeff Sessions (R-Ala.). "If they say what they know and think
about this, they will lose the buy-outs."
Infosys whistleblower Jay Palmer, who testified and is familiar with the displacement process, told Sessions that these workers
will get sued if they speak out. "That's the fear and intimidation that these people go through - they're blindsided," said Palmer.
Moreover, if IT workers refuse to train their foreign replacement, "they are going to be terminated with cause, which means
they won't even get their unemployment insurance," said Ron Hira, an associate professor at Howard University, who also testified.
Affected tech workers who speak out publicly and use their names, "will be blackballed from the industry," he said.
While lawmakers voiced either strong support or criticism of the program, there was interest in crafting legislation that would impose
some restrictions on H-1B use.
"America and American companies need more high-skilled workers - this is an undeniable fact," said Sen. Orrin Hatch (R-Utah).
"America's high-skilled worker shortage has become a crisis."
Hatch, who is leading the effort to increase the H-1B cap, suggested a willingness to raise wage levels for H-1B dependent
employers. They are exempt from U.S. worker protection rules if the H-1B worker is paid at least $60,000 or has a master's degree,
a figure that was set in law in 1998. Hatch suggested a wage level of $95,000.
Sen. Dick Durbin, (Dem-Ill.), who has joined with Grassley on legislation to impose some restrictions on H-1B visa use --
particularly in offshoring -- has argued for a rule that would keep large firms from having more than 50% of their workers on the
visa. This so-called 50/50 rule, as Durbin has noted, has drawn much criticism from India, where most of the affected companies are
located.
"I want to put the H-1B factories out of business," said Durbin.
Durbin got some support for the 50/50 rule from one person testifying in support of expanding the cap, Bjorn Billhardt, the founder
and president of Enspire Learning, an Austin-based company. Enspire creates learning development tools; Billhardt came to the U.S.
as an exchange student and went from an H-1B visa to a green card to, eventually, citizenship.
"I actually think that's a reasonable provision," said Billhardt of the 50% visa limit. He said it could help, "quite a bit."
At the same time, he urged lawmakers to raise the cap to end the lottery system now used to distribute visas once that cap is reached.
Today's hearing went well beyond the impact of H-1B use by outsourcing firms to the displacement of workers overall.
Hal Salzman, a Rutgers University professor who studies STEM (Science, Technology, Engineering and Math) workforce issues,
told the committee that the IT industry now fills about two-thirds of its entry-level positions with guest workers. "At the same
time, IT wages have stagnated for over a decade," he said.
H-1B supporters use demand for the visa - which will exceed the 85,000 cap -- as proof of economic demand. But Salzman argues
that U.S. colleges already graduate more scientists and engineers than find employment in those fields, about 200,000 more.
"Asking domestic graduates, both native-born and immigrant, to compete with guest workers on wages is not a winning strategy for
strengthening U.S. science, technology and innovation," said Salzman.
BASH Shell: How To Redirect stderr To stdout (redirect stderr to a File)
Posted on March 12, 2008 in Categories BASH Shell, Linux, UNIX. Q. How do I
redirect stderr to stdout? How do I redirect stderr to a file?
A. Bash and other modern shells provide an I/O redirection facility. There are 3 default
standard files (standard streams) open:
[a] stdin – used to get input (keyboard), i.e. data going into a program.
[b] stdout – used to write information (screen)
[c] stderr – used to write error messages (screen)
Understanding I/O stream numbers
The Unix / Linux standard I/O streams with numbers:
Handle   Name     Description
0        stdin    Standard input
1        stdout   Standard output
2        stderr   Standard error
Redirecting the standard error stream to a file
The following will redirect program error messages to a file called error.log:
$ program-name 2> error.log
$ command1 2> error.log
Redirecting both the standard error (stderr) and stdout to a file
Use the following syntax:
$ command-name &>file
OR
$ command > file-name 2>&1
Another useful example: # find /usr/home -name .profile 2>&1 | more
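A few more examples in the same vein (the file names are arbitrary, and the &>> form needs Bash 4 or later):
command1 &>> output.log        # append both stdout and stderr (Bash 4 shorthand)
command1 >> output.log 2>&1    # portable equivalent of the line above
command1 2> /dev/null          # discard error messages only
command1 2>&1 | grep -i error  # merge stderr into stdout so errors can be piped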
"I hunt sysadm" policy is the most realosnableif you you want to get into some coporate netwrok. So republication of this
three years old post is just a reminder. Any sysadmin that access corporates netwrok not from a dedicated computer using
VPN (corporate laptop) is engangering the corporation. As simple as that. The level of non-professionalism demonstrated by Hillary
Clinton IT staff suggests that this can be a problem in government too. After all Snowden documents now are studied by all major
intelligence agencies of the world.
This also outlines the main danger of "shadow It".
Notable quotes:
"... Journalist Ryan Gallagher reported that Edward Snowden , a former sys admin for NSA contractor Booz Allen Hamilton, provided The Intercept with the internal documents, including one from 2012 that's bluntly titled "I hunt sys admins." ..."
"... "Who better to target than the person that already has the 'keys to the kingdom'?" ..."
"... "They were written by an NSA official involved in the agency's effort to break into foreign network routers, the devices that connect computer networks and transport data across the Internet," ..."
"... "By infiltrating the computers of system administrators who work for foreign phone and Internet companies, the NSA can gain access to the calls and emails that flow over their networks." ..."
"... The latest leak suggests that some NSA analysts took a much different approach when tasked with trying to collect signals intelligence that otherwise might not be easily available. According to the posts, the author advocated for a technique that involves identifying the IP address used by the network's sys admin, then scouring other NSA tools to see what online accounts used those addresses to log-in. Then by using a ..."
"... that tricks targets into installing malware by being misdirected to fake Facebook servers, the intelligence analyst can hope that the sys admin's computer is sufficiently compromised and exploited. ..."
"... Once the NSA has access to the same machine a sys admin does, American spies can mine for a trove of possibly invaluable information, including maps of entire networks, log-in credentials, lists of customers and other details about how systems are wired. In turn, the NSA has found yet another way to, in theory, watch over all traffic on a targeted network. ..."
"... "Up front, sys admins generally are not my end target. My end target is the extremist/terrorist or government official that happens to be using the network some admin takes care of," the NSA employee says in the documents. ..."
"... "A key part of the protections that apply to both US persons and citizens of other countries is the mandate that information be in support of a valid foreign intelligence requirement, and comply with US Attorney General-approved procedures to protect privacy rights." ..."
"... Coincidentally, outgoing-NSA Director Keith Alexander said last year that he was working on drastically cutting the number of sys admins at that agency by upwards of 90 percent - but didn't say it was because they could be exploited by similar tactics waged by adversarial intelligence groups. ..."
In its quest to take down suspected terrorists and criminals abroad, the United States National Security Agency has adopted the
practice of hacking the system administrators that oversee private computer networks, new documents reveal.
The Intercept has published a handful of leaked
screenshots taken from an internal
NSA message board where one spy agency specialist spoke extensively about compromising not the computers of specific targets, but
rather the machines of the system administrators who control entire networks.
Journalist Ryan Gallagher reported that Edward Snowden, a former sys
admin for NSA contractor Booz Allen Hamilton, provided The Intercept with the internal documents, including one from 2012 that's
bluntly titled "I hunt sys admins."
According to the posts - some labeled "top secret" - NSA staffers should not shy away from hacking sys admins: a successful offensive
mission waged against an IT professional with extensive access to a privileged network could provide the NSA with unfettered capabilities,
the analyst acknowledged.
"Who better to target than the person that already has the 'keys to the kingdom'?" one of the posts reads.
"They were written by an NSA official involved in the agency's effort to break into foreign network routers, the devices that
connect computer networks and transport data across the Internet," Gallagher wrote for the article published late Thursday.
"By infiltrating the computers of system administrators who work for foreign phone and Internet companies, the NSA can gain access
to the calls and emails that flow over their networks."
Since last June, classified NSA materials taken
by Snowden and provided to certain journalists have exposed an increasing number of previously-secret surveillance operations that
range from purposely degrading international encryption standards and implanting malware in targeted machines, to tapping into fiber-optic
cables that transfer internet traffic and even vacuuming up data as it's moved into servers in a decrypted state.
The latest leak suggests that some NSA analysts took a much different approach when tasked with trying to collect signals intelligence
that otherwise might not be easily available. According to the posts, the author advocated for a technique that involves identifying
the IP address used by the network's sys admin, then scouring other NSA tools to see what online accounts used those addresses to
log-in. Then by using a previously-disclosed NSA tool that tricks targets into installing malware by being misdirected to fake Facebook servers, the intelligence analyst can hope that
the sys admin's computer is sufficiently compromised and exploited.
Once the NSA has access to the same machine a sys admin does, American spies can mine for a trove of possibly invaluable information,
including maps of entire networks, log-in credentials, lists of customers and other details about how systems are wired. In turn,
the NSA has found yet another way to, in theory, watch over all traffic on a targeted network.
"Up front, sys admins generally are not my end target. My end target is the extremist/terrorist or government official that
happens to be using the network some admin takes care of," the NSA employee says in the documents.
When reached for comment by The Intercept, NSA spokesperson Vanee Vines said that, "A key part of the protections that apply
to both US persons and citizens of other countries is the mandate that information be in support of a valid foreign intelligence
requirement, and comply with US Attorney General-approved procedures to protect privacy rights."
Coincidentally, outgoing-NSA Director Keith Alexander said last year that he was working on drastically cutting the number of
sys admins at that agency by upwards of 90 percent - but didn't say it was because they could be exploited by similar tactics waged
by adversarial intelligence groups. Gen. Alexander's decision came just weeks after Snowden - previously one of around 1,000 sys
admins working on the NSA's networks, according to
Reuters -
walked away from his role managing those networks with a trove of classified information.
This article is two years old and not much has happened during those two years. But still there is a chance that highly automated factories
can make manufacturing in the USA profitable again. The problem is that they will be even more profitable in East Asia ;-)
The rise of technologies such as 3-D printing and advanced robotics means that the next few decades for Asia's economies will
not be as easy or promising as the previous five.
OWEN HARRIES, the first editor, together with Robert Tucker, of The National Interest, once reminded me that experts-economists,
strategists, business leaders and academics alike-tend to be relentless followers of intellectual fashion, and the learned, as Harold
Rosenberg famously put it, a "herd of independent minds." Nowhere is this observation more apparent than in the prediction that we
are already into the second decade of what will inevitably be an "Asian Century"-a widely held but rarely examined view that Asia's
continued economic rise will decisively shift global power from the Atlantic to the western Pacific Ocean.
No doubt the numbers appear quite compelling. In 1960, East Asia accounted for a mere 14 percent of global GDP; today that figure
is about 27 percent. If linear trends continue, the region could account for about 36 percent of global GDP by 2030 and over half
of all output by the middle of the century. As if symbolic of a handover of economic preeminence, China, which only accounted for
about 5 percent of global GDP in 1960, will likely surpass the United States as the largest economy in the world over the next decade.
If past record is an indicator of future performance, then the "Asian Century" prediction is close to a sure thing.
Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job
done. But a great GUI is one that teaches a new user to eventually graduate to using CLI.
Notable quotes:
"... Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI. ..."
"... What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers. ..."
"... AIX's SMIT did this, or rather it wrote the commands that it executed to achieve what you asked it to do. This meant that you could learn: look at what it did and find out about which CLI commands to run. You could also take them, build them into a script, copy elsewhere, ... I liked SMIT. ..."
"... Cisco's GUI stuff doesn't really generate any scripts, but the commands it creates are the same things you'd type into a CLI. And the resulting configuration is just as human-readable (barring any weird naming conventions) as one built using the CLI. I've actually learned an awful lot about the Cisco CLI by using their GUI. ..."
"... Microsoft's more recent tools are also doing this. Exchange 2007 and newer, for example, are really completely driven by the PowerShell CLI. The GUI generates commands and just feeds them into PowerShell for you. So you can again issue your commands through the GUI, and learn how you could have done it in PowerShell instead. ..."
"... Moreover, the GUI authors seem to have a penchant to find new names for existing CLI concepts. Even worse, those names are usually inappropriate vagueries quickly cobbled together in an off-the-cuff afterthought, and do not actually tell you where the doodad resides in the menu system. With a CLI, the name of the command or feature set is its location. ..."
"... I have a cheap router with only a web gui. I wrote a two line bash script that simply POSTs the right requests to URL. Simply put, HTTP interfaces, especially if they implement the right response codes, are actually very nice to script. ..."
Deep End's Paul Venezia speaks out against the
overemphasis on GUIs in today's admin tools,
saying that GUIs are fine and necessary in many cases, but only after a complete CLI is in place, and that they cannot interfere
with the use of the CLI, only complement it. Otherwise, the GUI simply makes easy things easy and hard things much harder. He writes,
'If you have to make significant, identical changes to a bunch of Linux servers, is it easier to log into them one-by-one and run
through a GUI or text-menu tool, or write a quick shell script that hits each box and either makes the changes or simply pulls down
a few new config files and restarts some services? And it's not just about conservation of effort - it's also about accuracy. If
you write a script, you're certain that the changes made will be identical on each box. If you're doing them all by hand, you aren't.'"
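For what it's worth, a minimal sketch of the kind of script the summary has in mind (the host names, config file, and service are hypothetical):
#!/bin/bash
# push a new config file to each box and restart the affected service
for host in web01 web02 web03; do
    scp httpd.conf "$host:/etc/httpd/conf/httpd.conf"
    ssh "$host" 'service httpd restart'
done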
Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get
the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI.
A bad GUI with no CLI is the worst of both worlds, the author of the article got that right. The 80/20 rule applies: 80% of
the work is common to everyone, and should be offered with a GUI. And for the 20% that is custom to each sysadmin, well, use the CLI.
maxwell demon:
What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn
about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection,
of course) to other computers.
0123456 (636235) writes:
What would be nice is if the GUI could automatically create a shell script doing the change.
While it's not quite the same thing, our GUI-based home router has an option to download the config as a text file so you can
automatically reconfigure it from that file if it has to be reset to defaults. You could presumably use sed to change IP addresses,
etc, and copy it to a different router. Of course it runs Linux.
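For instance (the addresses are invented for illustration), the downloaded text config could be retargeted with sed before loading it into a second router:
# rewrite the LAN subnet, leaving everything else untouched
sed 's/192\.168\.1\./192.168.2./g' router-main.cfg > router-branch.cfg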
Alain Williams:
AIX's SMIT did this, or rather it wrote the commands that it executed to achieve what you asked it to do. This meant that
you could learn: look at what it did and find out about which CLI commands to run. You could also take them, build them into a
script, copy elsewhere, ... I liked SMIT.
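The same idea can be approximated in the shell; a rough sketch (the log path and commands are only examples) of a wrapper that records each command before running it, so the log doubles as documentation and as a replayable script:
run() {
    printf '%s\n' "$*" >> /var/log/admin-commands.log   # record the command line
    "$@"                                                # then execute it
}
run chkconfig NetworkManager off
run service network restart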
Ephemeriis:
What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn
about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection,
of course) to other computers.
Cisco's GUI stuff doesn't really generate any scripts, but the commands it creates are the same things you'd type into
a CLI. And the resulting configuration is just as human-readable (barring any weird naming conventions) as one built using the
CLI. I've actually learned an awful lot about the Cisco CLI by using their GUI.
We've just started working with Aruba hardware. Installed a mobility controller last week. They've got a GUI that does something
similar. It's all a pretty web-based front-end, but it again generates CLI commands and a human-readable configuration. I'm still
very new to the platform, but I'm already learning about their CLI through the GUI. And getting work done that I wouldn't be able
to if I had to look up the CLI commands for everything.
Microsoft's more recent tools are also doing this. Exchange 2007 and newer, for example, are really completely driven by
the PowerShell CLI. The GUI generates commands and just feeds them into PowerShell for you. So you can again issue your commands
through the GUI, and learn how you could have done it in PowerShell instead.
Anpheus:
Just about every Microsoft tool newer than 2007 does this. Virtual machine manager, SQL Server has done it for ages, I think
almost all the system center tools do, etc.
It's a huge improvement.
PoV:
All good admins document their work (don't they? DON'T THEY?). With a CLI or a script that's easy: it comes down to "log in
as user X, change to directory Y, run script Z with arguments A B and C - the output should look like D". Try that when all you
have is a GLUI (like a GUI, but you get stuck): open this window, select that option, drag a slider, check these boxes, click
Yes, three times. The output might look a little like this blurry screen shot and the only record of a successful execution is
a window that disappears as soon as the application ends.
I suppose the Linux community should be grateful that Windows made the fundamental systems design error of making everything
graphic. Without that basic failure, Linux might never have even got the toe-hold it has now.
skids:
I think this is a stronger point than the OP: GUIs do not lead to good documentation. In fact, GUIs pretty much are limited
to procedural documentation like the example you gave.
The best they can do as far as actual documentation, where the precise effect of all the widgets is explained, is a screenshot
with little quote bubbles pointing to each doodad. That's a ridiculous way to document.
This is as opposed to a command reference which can organize, usually in a pretty sensible fashion, exact descriptions of what
each command does.
Moreover, the GUI authors seem to have a penchant for finding new names for existing CLI concepts. Even worse, those names
are usually inappropriate vagaries quickly cobbled together as an off-the-cuff afterthought, and do not actually tell you where
the doodad resides in the menu system. With a CLI, the name of the command or feature set is its location.
Not that even good command references are mandatory by today's pathetic standards. Even the big boys like Cisco have shown
major degradation in the quality of their documentation during the last decade.
pedantic bore:
I think the author might not fully understand who most admins are. They're people who couldn't write a shell script if their
lives depended on it, because they've never had to. GUI-dependent users become GUI-dependent admins.
As a percentage of computer users, people who can actually navigate a CLI are an ever-diminishing group.
/etc/init.d/NetworkManager stop
chkconfig NetworkManager off
chkconfig network on
vi /etc/sysconfig/network
vi /etc/sysconfig/network-scripts/ifcfg-eth0
At least they named it NetworkManager, so experienced admins could recognize it as a culprit. Anything named in CamelCase is
almost invariably written by new school programmers who don't grok the Unix toolbox concept and write applications instead of
tools, and the bloated drivel is usually best avoided.
Darkness404 (1287218) writes: on Monday October 04, @07:21PM (#33789446)
There are more and more small businesses (5, 10 or so employees) realizing that they can get things done easier if they had
a server. Because the business can't really afford to hire a sysadmin or a full-time tech person, it's generally the employee who
"knows computers" (you know, the person who has to help the boss check his e-mail every day, etc.) and since they don't have the
knowledge of a skilled *Nix admin, a GUI makes their administration a lot easier.
So with the increasing use of servers among non-admins, it only makes sense for a growth in GUI-based solutions.
Svartalf (2997) writes: Ah... But the thing is... You don't NEED the GUI with recent Linux systems- you do with Windows.
oatworm (969674) writes: on Monday October 04, @07:38PM (#33789624) Homepage
Bingo. Realistically, if you're a company with fewer than 100 employees (read: most companies), you're only going to have
a handful of servers in house and they're each going to be dedicated to particular roles. You're not going to have 100 clustered
fileservers - instead, you're going to have one or maybe two. You're not going to have a dozen e-mail servers - instead, you're
going to have one or two. Consequently, the office admin's focus isn't going to be scalability; it just won't matter to the admin
if they can script, say, creating a mailbox for 100 new users instead of just one. Instead, said office admin is going to be more
focused on finding ways to do semi-unusual things (e.g. "create a VPN between this office and our new branch office", "promote
this new server as a domain controller", "install SQL", etc.) that they might do, oh, once a year.
The trouble with Linux, and I'm speaking as someone who's used YaST in precisely this context, is that you have to make a choice
- do you let the GUI manage it or do you CLI it? If you try to do both, there will be inconsistencies because the grammar of the
config files is too ambiguous; consequently, the GUI config file parser will probably just overwrite whatever manual changes it
thinks is "invalid", whether it really is or not. If you let the GUI manage it, you better hope the GUI has the flexibility necessary
to meet your needs. If, for example, YaST doesn't understand named Apache virtual hosts, well, good luck figuring out where it's
hiding all of the various config files that it was sensibly spreading out in multiple locations for you, and don't you dare use
YaST to manage Apache again or it'll delete your Apache-legal but YaST-"invalid" directive.
The only solution I really see is for manual config file support with optional XML (or some other machine-friendly but still
human-readable format) linkages. For example, if you want to hand-edit your resolv.conf, that's fine, but if the GUI is going
to take over, it'll toss a directive on line 1 that says "#import resolv.conf.xml" and immediately overrides (but does not overwrite)
everything following that. Then, if you still want to use the GUI but need to hand-edit something, you can edit the XML file using
the appropriate syntax and know that your change will be reflected on the GUI.
That's my take. Your mileage, of course, may vary.
icebraining (1313345) writes: on Monday October 04, @07:24PM (#33789494) Homepage
I have a cheap router with only a web gui. I wrote a two line bash script that simply POSTs the right requests to URL.
Simply put, HTTP interfaces, especially if they implement the right response codes, are actually very nice to script.
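Presumably something along these lines (the router address, endpoints, and form fields are invented; a real device would have its own):
# two POSTs: authenticate, then apply a setting
curl -s -c /tmp/router-session -d 'username=admin&password=secret' http://192.168.1.1/login.cgi
curl -s -b /tmp/router-session -d 'wlan_enable=1&action=apply' http://192.168.1.1/apply.cgi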
devent (1627873) writes:
Why Windows servers have a GUI is beyond me anyway. The servers are running 99.99% of the time without a monitor and normally
you just login per ssh to a console if you need to administer them. But they are consuming the extra RAM, the extra CPU cycles
and the extra security threats. I don't know, but can you de-install the GUI from a Windows server? Or better, do you have an option
for no-GUI installation? Just saw the minimum hardware requirements. 512 MB RAM and 32 GB or greater disk space. My server runs
sirsnork (530512) writes: on Monday October 04, @07:43PM (#33789672)
it's called a "core" install in Server 2008 and up, and if you do that, there is no going back, you can't ever add the GUI
back.
What this means is you can run a small subset of MS services that don't need GUI interaction. With R2 that subset grew somewhat
as they added the ability to install .NET too, which meant you could run IIS in a useful manner (arguably the strongest reason
to want to do this in the first place).
Still it's a one way trip and you better be damn sure what services need to run on that box for the lifetime of that box or
you're looking at a reinstall. Most windows admins will still tell you the risk isn't worth it.
Simple things like network configuration without a GUI in Windows are tedious, and, at least the last time I looked, you lost the
ability to trunk network ports because the NIC manufacturers all assumed you had a GUI to configure your NICs.
prichardson (603676) writes: on Monday October 04, @07:27PM (#33789520) Journal
This is also a problem with Mac OS X Server. Apple builds their services from open source products and adds a GUI for configuration
to make it all clickable and easy to set up. However, many options that can be set on the command line can't be set in the GUI.
Even worse, making CLI changes to services can break the GUI entirely.
The hardware and software are both super stable and run really smoothly, so once everything gets set up, it's awesome. Still,
it's hard for a guy who would rather make changes on the CLI to get used to.
MrEricSir (398214) writes:
Just because you're used to a CLI doesn't make it better. Why would I want to read a bunch of documentation, mess with command
line options, then read whole block of text to see what it did? I'd much rather sit back in my chair, click something, and then
see if it worked. Don't make me read a bunch of man pages just to do a simple task. In essence, the question here is whether it's
okay for the user to be lazy and use a GUI, or whether the programmer should be too lazy to develop a GUI.
ak_hepcat (468765) writes: on Monday October 04, @07:38PM (#33789626) Homepage Journal
Probably because it's also about the ease of troubleshooting issues.
How do you troubleshoot something with a GUI after you've misconfigured? How do you troubleshoot a programming error (bug)
in the GUI -> device communication? How do you scale to tens, hundreds, or thousands of devices with a GUI?
CLI makes all this easier and more manageable.
arth1 (260657) writes:
Why would I want to read a bunch of documentation, mess with command line options, then read whole block of text to see what
it did? I'd much rather sit back in my chair, click something, and then see if it worked. Don't make me read a bunch of man pages
just to do a simple task. Because then you'll be stuck at doing simple tasks, and will never be able to do more advanced tasks.
Without hiring a team to write an app for you instead of doing it yourself in two minutes, that is. The time you spend reading
man
fandingo (1541045) writes: on Monday October 04, @07:54PM (#33789778)
I don't think you really understand systems administration. 'Users,' or in this case admins, don't typically do stuff once.
Furthermore, they need to know what they did and how to do it again (i.e. on a new server or whatever), or at least remember what they did.
One-off stuff isn't common and is a sign of poor administration (i.e. tracking changes and following processes).
What I'm trying to get at is that admins shouldn't do anything without reading the manual. As a Windows/Linux admin, I tend
to find Linux easier to properly administer because I either already know how to perform an operation or I have to read the manual
(manpage) and learn a decent amount about the operation (i.e. more than click here/use this flag).
Don't get me wrong, GUIs can make unknown operations significantly easier, but they often lead to poor process management.
To document processes, screenshots are typically needed. They can be done well, but I find that GUI documentation (created by
admins, not vendor docs) tend to be of very low quality. They are also vulnerable to 'upgrades' where vendors change the interface
design. CLI programs typically have more stable interfaces, but maybe that's just because they have been around longer...
maotx (765127) writes: on Monday October 04, @07:42PM (#33789666)
That's one thing Microsoft did right with Exchange 2007. They built it entirely around their new powershell CLI and then built
a GUI for it. The GUI is limited compared to what you can do with the CLI, but you can get most things done. The CLI becomes
extremely handy for batch jobs and exporting statistics to csv files. I'd say it's really up there with BASH in terms of scripting,
data manipulation, and integration (not just Exchange but WMI, SQL, etc.)
They tried to do similar with Windows 2008 and their Core [petri.co.il] feature, but they still have to load a GUI to present
a prompt...
Charles Dodgeson (248492) writes: on Monday October 04, @08:51PM (#33790206) Homepage Journal
Probably Debian would have been OK, but I was finding admin of most Linux distros a pain for exactly these reasons.
I couldn't find a layer where I could do everything that I needed to do without worrying about one thing stepping on another.
No doubt there are ways that I could manage a Linux system without running into different layers of management tools stepping
on each other, but it was a struggle.
There were other reasons as well (although there is a lot that I miss about Linux), but I think that this was one of the leading
reasons.
(NB: I realize that this is flamebait (I've got karma to burn), but that isn't my intention here.)
"... Imagine working at HP and having to listen to Carly Fiorina bulldoze you...she is like a blow-torch...here are 4 minutes of Carly and Ralph Nader (if you can take it): https://www.youtube.com/watch?v=vC4JDwoRHtk ..."
"... My husband has been a software architect for 30 years at the same company. Never before has he seen the sheer unadulterated panic in the executives. All indices are down and they are planning for the worst. Quality is being sacrificed for " just get some relatively functional piece of shit out the door we can sell". He is fighting because he has always produced a stellar product and refuses to have shit tied to his name ( 90% of competitor benchmarks fail against his projects). They can't afford to lay him off, but the first time in my life I see my husband want to quit... ..."
"... HP basically makes computer equipment (PCs, servers, Printers) and software. Part of the problem is that computer hardware has been commodized. Since PCs are cheap and frequent replacements are need, People just by the cheapest models, expecting to toss it in a couple of years and by a newer model (aka the Flat screen TV model). So there is no justification to use quality components. Same is become true with the Server market. Businesses have switched to virtualization and/or cloud systems. So instead of taking a boat load of time to rebuild a crashed server, the VM is just moved to another host. ..."
"... I hung an older sign next to the one saying Information Technology. Somehow MIS-Information Technology seemed appropriate.) ..."
"... Then I got to my first duty assignment. It was about five months after the first moon landing, and the aerospace industry was facing cuts in government aerospace spending. I picked up a copy of an engineering journal in the base library and found an article about job cuts. There was a cartoon with two janitors, buckets at their feet and mops in their hands, standing before a blackboard filled with equations. Once was saying to the other, pointing to one section, "you can see where he made his mistake right here...". It represented two engineers who had been reduced to menial labor after losing their jobs. ..."
"... So while I resent all the H1Bs coming into the US - I worked with several for the last four years of my IT career, and was not at all impressed - and despise the politicians who allow it, I know that it is not the first time American STEM grads have been put out of jobs en masse. In some ways that old saying applies: the more things change, the more they stay the same ..."
"... Just like Amazon, HP will supposedly make billions in profit analyzing things in the cloud that nobody looks at and has no use to the real economy, but it makes good fodder for Power Point presentations. I am amazed how much daily productivity goes into creating fancy charts for meetings that are meaningless to the actual business of the company. ..."
"... 'Computers' cost as much - if not more time than they save, at least in corporate settings. Used to be you'd work up 3 budget projections - expected, worst case and best case, you'd have a meeting, hash it out and decide in a week. Now you have endless alternatives, endless 'tweaking' and changes and decisions take forever, with outrageous amounts of time spent on endless 'analysis' and presentations. ..."
"... A recent lay off here turned out to be quite embarrassing for Parmalat there was nobody left that knew how to properly run the place they had to rehire many ex employees as consultants-at a costly premium ..."
"... HP is laying off 80,000 workers or almost a third of its workforce, converting its long-term human capital into short-term gains for rich shareholders at an alarming rate. The reason that product quality has declined is due to the planned obsolescence that spurs needless consumerism, which is necessary to prop up our debt-backed monetary system and the capitalist-owned economy that sits on top of it. ..."
"... The world is heading for massive deflation. Computers have hit the 14 nano-meter lithography zone, the cost to go from 14nm to say 5nm is very high, and the net benefit to computing power is very low, but lets say we go from 14nm to 5nm over the next 4 years. Going from 5nm to 1nm is not going to net a large boost in computing power and the cost to shrink things down and re-tool will be very high for such an insignificant gain in performance. ..."
"... Another classic "Let's rape all we can and bail with my golden parachute" corporate leaders setting themselves up. Pile on the string of non-IT CEOs that have been leading the company to ruin. To them it is nothing more than a contest of being even worse than their predecessor. Just look at the billions each has lost before their exit. Compaq, a cluster. Palm Pilot, a dead product they paid millions for and then buried. And many others. ..."
"... Let's not beat around the bush, they're outsourcing, firing Americans and hiring cheap labor elsewhere: http://www.bloomberg.com/news/articles/2015-09-15/hewlett-packard-to-cut-up-to-30-000-more-jobs-in-restructuring It's also shifting employees to low-cost areas, and hopes to have 60 percent of its workers located in cheaper countries by 2018, Nefkens said. ..."
"... Carly Fiorina: (LOL, leading a tech company with a degree in medieval history and philosophy) While at ATT she was groomed from the Affirmative Action plan. ..."
"... It is very straightforward. Replace 45,000 US workers with 100,000 offshore workers and you still save millions of USD ! Use the "savings" to buy back stock, then borrow more $$ at ZIRP to buy more stock back. ..."
"... If you look on a site like LinkedIN, it will always say 'We're hiring!'. YES, HP is hiring.....but not YOU, they want Ganesh Balasubramaniamawapbapalooboopawapbamboomtuttifrutti, so that they can work him as modern day slave labor for ultra cheap. We can thank idiot 'leaders' like Meg Pasty Faced Whitman and Bill 'Forced Vaccinations' Gates for lobbying Congress for decades, against the rights of American workers. ..."
"... An era of leadership in computer technology has died, and there is no grave marker, not even a funeral ceremony or eulogy ... Hewlett-Packard, COMPAQ, Digital Equipment Corp, UNIVAC, Sperry-Rand, Data General, Tektronix, ZILOG, Advanced Micro Devices, Sun Microsystems, etc, etc, etc. So much change in so short a time, leaves your mind dizzy. ..."
yeah thanks Carly ... HP made bullet-proof products that would last forever..... I still buy HP workstation notebooks, especially
now when I can get them for $100 on ebay .... I sold HP products in the 1990s .... we had HP laserjet IIs that companies would
run day & night .... virtually no maintenance ... when PCL5 came around then we had LJ IIIs .... and still companies would call
for LJ I's, .... 100 pounds of invincible Printing ! .
This kind of product has no place in the world of Planned Obsolescence .... I'm currently running an 8510w, 8530w, 2530p, Dell
6420 quad i7, hp printers hp scanners, hp pavilion desktops, .... all for less than what a Laserjet II would have cost in 1994,
Total.
Not My Real Name
I still have my HP 15C scientific calculator I bought in 1983 to get me through college for my engineering degree. There is
nothing better than a hand held calculator that uses Reverse Polish Notation!
BigJim
HP used to make fantastic products. I remember getting their RPN calculators back in th 80's; built like tanks. Then they decided
to "add value" by removing more and more material from their consumer/"prosumer" products until they became unspeakably flimsy.
They stopped holding things together with proper fastenings and started hot-melting/gluing it together, so if it died you had
to cut it open to have any chance of fixing it.
I still have one of their Laserjet 4100 printers. I expect it to outlast anything they currently produce, and it must be going
on 16+ years old now.
Fuck you, HP. You started selling shit and now you're eating through your seed corn. I just wish the "leaders" who did this
to you had to pay some kind of penalty greater than getting $25M in a severance package.
Automatic Choke
+100. The path of HP is everything that is wrong about modern business models. I still have a 5MP laserjet (one of the first),
still works great. Also have a number of 42S calculators.....my day-to-day workhorse and several spares. I don't think the present
HP could even dream of making these products today.
nope-1004
How well will I profit, as a salesman, if I sell you something that works? How valuable are you, as a customer in my database,
if you never come back? Confucius say "Buy another one, and if you can't afford it, f'n finance it!" It's the growing trend.
Look at appliances. Nothing works anymore.
hey big brother.... if you are curious, there is a damn good android emulator of the HP42S available (Free42). really it is
so good that it made me relax about accumulating more spares. still not quite the same as a real calculator. (the 42S, by the
way, is the modernization/simplification of the classic HP41, the real hardcore, very-programmable, reconfigurable, hackable unit
with all the plug-in-modules that came out in the early 80s.)
Miss Expectations
Imagine working at HP and having to listen to Carly Fiorina bulldoze you...she is like a blow-torch...here are 4 minutes
of Carly and Ralph Nader (if you can take it): https://www.youtube.com/watch?v=vC4JDwoRHtk
Miffed Microbiologist
My husband has been a software architect for 30 years at the same company. Never before has he seen the sheer unadulterated
panic in the executives. All indices are down and they are planning for the worst. Quality is being sacrificed for "just get
some relatively functional piece of shit out the door we can sell". He is fighting because he has always produced a stellar product
and refuses to have shit tied to his name (90% of competitor benchmarks fail against his projects). They can't afford to lay
him off, but for the first time in my life I see my husband wanting to quit...
unplugged
I've been an engineer for 31 years - our management's unspoken motto at the place I'm at (large company) is: "release it now,
we'll put in the quality later". I try to put in as much as possible before the product is shoved out the door without killing
myself doing it.
AGuy
Do they even make test equipment anymore?
HP test and measurement was spun off many years ago as Agilent. The electronics part of Agilent was spun off as Keysight late
last year.
HP basically makes computer equipment (PCs, servers, printers) and software. Part of the problem is that computer hardware
has been commoditized. Since PCs are cheap and frequent replacements are needed, people just buy the cheapest models, expecting to
toss them in a couple of years and buy a newer model (aka the flat-screen TV model). So there is no justification to use quality
components. The same is becoming true of the server market. Businesses have switched to virtualization and/or cloud systems. So instead
of taking a boatload of time to rebuild a crashed server, the VM is just moved to another host.
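A rough sketch of what "just move the VM" looks like in practice, using the libvirt Python bindings; the host names and guest name below are hypothetical placeholders, and a real setup would also need shared storage and compatible hypervisors:

```python
# Sketch: live-migrating a KVM guest between two hosts with libvirt.
# Host URIs and the domain name are made-up examples, not a real deployment.
import libvirt

src = libvirt.open("qemu:///system")                  # local hypervisor
dst = libvirt.open("qemu+ssh://standby-host/system")  # destination hypervisor

dom = src.lookupByName("app-server-01")               # hypothetical guest
# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied over.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

print("guest now running on", dst.getHostname())
```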
HP has also adopted the Computer Associates business model (aka Borg). HP buys up new tech companies and sits on the tech and
never improves it. It decays and gets replaced with a system from a competitor. It also has a habit of buying outdated tech companies
that never generate the revenues HP thinks they will.
BullyBearish
When Carly was CEO of HP, she instituted a draconian "pay for performance" plan. She ended up leaving with over $146 Million
because she was smart enough not to specify "what type" of performance.
GeezerGeek
Regarding your statement "All those engineers choosing to pursue other opportunities", we need to realize that tech in general
has been very susceptible to the vagaries of government actions. Now the employment problems are due to things like globalization
and H1B programs. Some 50 years ago tech - meaning science and engineering - was hit hard as the US space program wound down.
Permit me this retrospective:
I graduated from a quite good school with a BS in Physics in 1968. My timing was not all that great, since that was when they
stopped granting draft deferments for graduate school. I joined the Air Force, but as an enlisted airman, not an officer. Following
basic training, I was sent to learn to operate PCAM operations. That's Punched Card Accounting Machines. Collators. Sorters. Interpreters.
Key punches. I was in a class with nine other enlistees. One had just gotten a Masters degree in something. Eight of us had a
BS in one thing or another, but all what would now be called STEM fields. The least educated only had an Associate degree. We
all enlisted simply to avoid being drafted into the Marines. (Not that there's anything wrong with the Marines, but all of us
proclaimed an allergy to energetic lead projectiles and acted accordingly. Going to Canada, as many did, pretty much ensured never
getting a job in STEM fields later in life.) So thanks to government action (fighting in VietNam, in this case) a significant
portion of educated Americans found themselves diverted from chosen career paths. (In my case, it worked out fine. I learned to
program, etc., and spent a total of over 40 years in what is now called IT. I think it was called EDP when I started the trek.
Somewhere along the line it became (where I worked) Management Information Systems. MIS. And finally the department became simply
Information Technology. I hung an older sign next to the one saying Information Technology. Somehow MIS-Information Technology
seemed appropriate.)
Then I got to my first duty assignment. It was about five months after the first moon landing, and the aerospace industry
was facing cuts in government aerospace spending. I picked up a copy of an engineering journal in the base library and found an
article about job cuts. There was a cartoon with two janitors, buckets at their feet and mops in their hands, standing before
a blackboard filled with equations. One was saying to the other, pointing to one section, "you can see where he made his mistake
right here...". It represented two engineers who had been reduced to menial labor after losing their jobs.
So while I resent all the H1Bs coming into the US - I worked with several for the last four years of my IT career, and
was not at all impressed - and despise the politicians who allow it, I know that it is not the first time American STEM grads
have been put out of jobs en masse. In some ways that old saying applies: the more things change, the more they stay the same.
If you made it this far, thanks for your patience.
adr
Just like Amazon, HP will supposedly make billions in profit analyzing things in the cloud that nobody looks at and has
no use to the real economy, but it makes good fodder for Power Point presentations. I am amazed how much daily productivity goes
into creating fancy charts for meetings that are meaningless to the actual business of the company.
IT'S ALL BULLSHIT!!!!!
I designed more products in one year for the small company I work for than a $15 billion corporation did throughout their entire
design department employing hundreds of people. That is because 90% of their workday is spent preparing crap for meetings and
they never really get anything meaningful done.
It took me one week to design a product and send it out for production branded for the company I work for, but it took six
months to get the same type of product passed through the multi billion dollar corporation we license for. Because it had to pass
through layer after layer of bullshit and through every level of management before it could be signed off. Then a month later
somebody would change their mind in middle management and the product would need to be changed and go through the cycle all over
again.
Their own bag department made six bags last year; I designed 16. Funny how I outproduce a department of six people whose only
job is to make bags, yet I only get paid the salary of one.
Maybe I'm just an imbecile for working hard.
Bear
You also have to add all the wasted time of employees having to sit through those presentations and the even more wasted time
on Ashley Madison
cynicalskeptic
'Computers' cost as much time as they save - if not more - at least in corporate settings. Used to be you'd work up 3
budget projections - expected, worst case and best case, you'd have a meeting, hash it out and decide in a week. Now you have
endless alternatives, endless 'tweaking' and changes and decisions take forever, with outrageous amounts of time spent on endless
'analysis' and presentations.
EVERY VP now has an 'Administrative Assistant' whose primary job is to develop PowerPoint presentations for the endless meetings
that take up time - without any decisions ever being made.
Computers stop people from thinking. In ages past when you used a slide rule you had to know the order of magnitude of the
end result. Now people make a mistake and come up with a ridiculous number and take it at face value because 'the computer' produced
it.
Any exec worth anything knew what a given line in their department, or the total, should be, plus or minus a small amount. I can't count
the number of times budgets and analyses were WRONG because someone left off a few lines on a spreadsheet total.
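That kind of omission is trivial to guard against; here is a sketch of the sanity check being described, with made-up budget lines (the figures are hypothetical, not from any real budget):

```python
# Reconcile a reported budget total against the sum of its line items.
# All numbers are illustrative placeholders.
line_items = {
    "salaries": 1_200_000,
    "travel": 85_000,
    "software licenses": 240_000,
    "facilities": 410_000,
}
reported_total = 1_525_000  # the figure typed into the summary sheet

computed_total = sum(line_items.values())
if computed_total != reported_total:
    # Flags the mismatch: 1,935,000 vs 1,525,000 - a line was left out of the summary.
    print(f"Mismatch: lines sum to {computed_total:,}, report says {reported_total:,}")
```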
Yes, computer modeling for advanced tech and engineering is a help, CAD/CAM is great, and many other applications in the tech/scientific
world are a great help, but letting computers loose in corporate and finance has produced endless waste AND - worse - things like
HFT (i.e. 'better', more effective ways to manipulate and cheat markets).
khnum
A recent layoff here turned out to be quite embarrassing for Parmalat: there was nobody left who knew how to properly
run the place, and they had to rehire many ex-employees as consultants - at a costly premium.
Anopheles
Consultants don't come at that much of a premium because the company doesn't have to pay benefits, vacation, sick days, or payroll
taxes, etc. Plus it's really easy and cheap to get rid of consultants.
arrowrod
Obviously, you haven't worked as a consultant. You get paid by the hour. To clean up a mess. 100 hours a week are not uncommon.
(What?, is it possible to work 100 hours a week? Yes, it is, but only for about 3 months.)
RaceToTheBottom
HP Executives are trying hard to bring the company back to its roots: The ability to fit into one garage...
PrimalScream
ALL THAT Meg Whitman needs to do ... is to FIRE EVERYBODY !! Then have all the products made in China, process all the sales
orders in Hong Kong, and sub-contract the accounting and tax paperwork to India. Then HP can use all the profits for stock buybacks,
except of course for Meg's salary ... which will keep rising astronomically!
Herdee
That's where education gets you in America. The Government sold out America's manufacturing base to Communist China, which holds
the debt of the USA. Who would ever guess that right-wing neo-cons (neo-nazis) running the government would sell out to communists
just to get the money for war? Very weird.
Really20
"Communist"? The Chinese government, like that of the US, never believed in worker ownership of businesses and never believed
that the commerical banking system (whether owned by the state, or private corporations which act like a state) should not control
money. Both countries believe in centralization of power among a few shareholders, who take the fruits of working people's labor
while contributing nothing of value themselves (money being but a token that represents a claim on real capital, not capital itself.)
Management and investors ought to be separate from each other; management should be chosen by workers by universal equal vote,
while a complementary investor board should be chosen by investors much as corporate boards are now. Both of these boards should
be legally independent but bound organizations; the management board should run the business while the investor board should negotiate
with the management board on the terms of equity issuance. No more buybacks, no more layoffs or early retirements, unless workers
as a whole see a need for it to maintain the company.
The purpose of investors is to serve the real economy, not the other way round; and in turn, the purpose of the real economy
is to serve humanity, not the other way around. Humans should stop being slaves to perpetual growth.
Really20
HP is laying off 80,000 workers or almost a third of its workforce, converting its long-term human capital into short-term
gains for rich shareholders at an alarming rate. The reason that product quality has declined is due to the planned obsolescence
that spurs needless consumerism, which is necessary to prop up our debt-backed monetary system and the capitalist-owned economy
that sits on top of it.
NoWayJose
HP - that company that sells computers and printers made in China and ink cartridges made in Thailand?
Dominus Ludificatio
Another company going down the drain because their focus is short-term returns with crappy products. They will also bring down
any company they buy.
Barnaby
HP is microcosm of what Carly will do to the US: carve it like a pumpkin and leave the shell out to bake in the sun for a few
weeks. But she'll make sure and poison the seeds too! Don't want anything growing out of that pesky Palm division...
Dre4dwolf
The world is heading for massive deflation. Computers have hit the 14-nanometer lithography zone; the cost to go from
14nm to, say, 5nm is very high, and the net benefit to computing power is very low, but let's say we go from 14nm to 5nm over the
next 4 years. Going from 5nm to 1nm is not going to net a large boost in computing power, and the cost to shrink things down and
re-tool will be very high for such an insignificant gain in performance.
What does that mean?
Computers (at least non-quantum ones) have hit the point where about 80-90% of the potential of the current science has
been tapped.
This means that the consumer is not going to be put in the position where they will have to upgrade to faster systems for
at least another 7-8 years.... (because the new computer won't be that much faster than their existing one).
If no one is upgrading, the only IT sectors of the economy that stand to make any money are software companies (Microsoft,
Apple, and other smaller software developers); most software has not caught up with hardware yet.
We are obviously heading for massive deflation. Consumer spending levels as a % are probably around where they were in
the late 70s - mid 80s. This is a very deflationary environment that is being compounded by a high debt burden (most of everyone's
income is going to service their debts), which signals monetary tightening is going on... people simply don't have enough discretionary
income to spend on new toys.
All that to me screams SELL consumer electronics stocks, because profits are GOING TO DECLINE, SALES ARE GOING TO DECLINE.
There is no way, no amount of buybacks, that will float the stocks of corporations like HP/Dell/IBM etc... it is inevitable that these
stocks will be worth 30% less over the next 5 - 8 years.
But what do I know? Maybe I am missing something.
In any case, a lot of pressure is being put on HP to do all it can at any cost to boost the stock valuation, because so much
of its stock is institution-owned; they will strip the wallpaper off the walls and sell it to a recycling plant if it would give
them more money to boost the stock. That to me signals that most of the people pressuring the board of HP to boost the
stock want them to gut the company as much as they can, to boost it some trivial % points so that the majority of shares can be
dumped onto muppets.
To me it pretty much also signals something is terribly wrong at HP and no one is talking about it.
PoasterToaster
Other than die shrinks there really hasn't been a lot going on in the CPU world since Intel abandoned its Netburst architecture
and went back to its (Israeli created) Pentium 3 style pipeline. After that they gave up on increasing speed and resorted to selling
more cores. Now that wall has been hit, they have been selling "green" and "efficient" nonsense in place of increasing power.
x86 just needs to go, but a lot is invested in it, not the least of which is that 1-2 punch of forced, contrived obsolescence
carried out in a joint operation with Microsoft. 15 years ago you could watch videos with no problem on your old machine using
Windows XP. Fast forward to now and their chief bragging point is still "multitasking" and the ability to process datastreams
like video. It's a joke.
The future is not in the current CPU paradigm of instructions per second; it will be in terms of variables per second. It will
be more along the lines of what GPU manufacturers are creating with their thousands of "engines" or "processing units" per chip,
rather than the 4, 6 or 12 core monsters that Intel is pushing. They have nearly given up on their roadmap to push out to 128
cores as it is. x86 just doesn't work with all that.
Dojidog
Another classic "Let's rape all we can and bail with my golden parachute" corporate leaders setting themselves up. Pile
on the string of non-IT CEOs that have been leading the company to ruin. To them it is nothing more than a contest of being even
worse than their predecessor. Just look at the billions each has lost before their exit. Compaq, a cluster. Palm Pilot, a dead
product they paid millions for and then buried. And many others.
Think the split is going to help? Think again. Rather than taking the opportunity to fix their problems, they have just duplicated
and perpetuated them into two separate entities.
HP is a company that is mired in a morass of unmanageable business processes and patchwork of antiquated applications all interconnected
to the point they are petrified to try and uncouple them.
Just look at their stock price since January. The insiders know. Want to fix HP? All it would take is a savvy IT based leader
with a boatload of common sense. What makes money at HP? Their printers and ink. Not thinking they can provide enterprise solutions
to others when they can't even get their own house in order.
Carly Fiorina: (LOL, leading a tech company with a degree in medieval history and philosophy.) While at AT&T she was groomed
through the Affirmative Action plan.
Alma Mater: Stanford University (B.A. in medieval history and philosophy); University of Maryland (MBA); Massachusetts Institute
of Technology
Patricia Russo: (Lucent) (Degree in Political Science.) Another lady elevated through the AA plan, Russo got her bachelor's
degree from Georgetown University in political science and history in 1973. She finished the advanced management program at Harvard
Business School in 1989.
Both ladies steered their corporations to failure.
Clowns on Acid
It is very straightforward. Replace 45,000 US workers with 100,000 offshore workers and you still save millions of USD
! Use the "savings" to buy back stock, then borrow more $$ at ZIRP to buy more stock back.
You guys don't know nuthin'.
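The arithmetic behind that kind of swap is easy to sketch; the per-head cost figures below are purely hypothetical placeholders, not HP's actual numbers:

```python
# Back-of-the-envelope labor-arbitrage math with made-up salary figures.
us_workers, us_cost_per_head = 45_000, 120_000               # hypothetical fully loaded cost
offshore_workers, offshore_cost_per_head = 100_000, 30_000   # hypothetical

us_payroll = us_workers * us_cost_per_head
offshore_payroll = offshore_workers * offshore_cost_per_head

print(f"US payroll:       ${us_payroll:,}")                       # $5,400,000,000
print(f"Offshore payroll: ${offshore_payroll:,}")                 # $3,000,000,000
print(f"'Savings':        ${us_payroll - offshore_payroll:,}")    # $2,400,000,000
```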
homiegot
HP: one of the worst places you could work. Soulless.
Pancho de Villa
Ladies and Gentlemen! Integrity has left the Building!
space junk
I worked there for a while and it was total garbage. There are still some great folks around, but they are getting paid less
and less, and having to work longer hours for less pay while reporting to God knows who, often a foreigner with crappy engrish
skills, yes likely another 'diversity hire'. People with DEEP knowledge, decades and decades, have either gotten unfairly fired
or demoted, made to quit, or if they are lucky, taken some early retirement and GTFO (along with their expertise - whoopsie! who
knew? unintended consequences are a bitch aren't they? )....
If you look on a site like LinkedIN, it will always say 'We're hiring!'. YES, HP is hiring.....but not YOU, they want Ganesh
Balasubramaniamawapbapalooboopawapbamboomtuttifrutti, so that they can work him as modern day slave labor for ultra cheap. We
can thank idiot 'leaders' like Meg Pasty Faced Whitman and Bill 'Forced Vaccinations' Gates for lobbying Congress for decades,
against the rights of American workers.
Remember that Meg 'Pasty Faced' Whitman is the person who came up with the idea of a 'lights out' datacenter....that's right,
it's the concept of putting all of your computers in a building, in racks, in the dark, and maybe hiring an intern to come in
once a month and keep them going. This is what she actually believed. Along with her other statement to the HP workforce which
says basically that the future of HP is one of total automation.....TRANSLATION: If you are a smart admin, engineer, project manager,
architect, sw tester, etc.....we (HP management) think you are an IDIOT and can be replaced by a robot, a foreigner, or any other
cheap worker.
The race to the bottom is, like they say, a spaceship approaching a black hole...... after a while the laws of physics and common
sense just don't apply anymore.
InnVestuhrr
An era of leadership in computer technology has died, and there is no grave marker, not even a funeral ceremony or eulogy
... Hewlett-Packard, COMPAQ, Digital Equipment Corp, UNIVAC, Sperry-Rand, Data General, Tektronix, ZILOG, Advanced Micro Devices,
Sun Microsystems, etc, etc, etc. So much change in so short a time, leaves your mind dizzy.
"... By David Masciotra, the author of Mellencamp: American Troubadour (University Press of Kentucky). He has also written for Salon, the Atlantic and the Los Angeles Review of Books. For more information visit www.davidmasciotra.com. Originally published at Alternet ..."
"... Robert Reich, in his book Supercapitalism, explains that in the past 30 years the two industries with the most excessive increases in prices are health care and higher education. ..."
"... Using student loan loot and tax subsidies backed by its $3.5 billion endowment, New York University has created a new administrative class of aristocratic compensation. The school not only continues to hire more administrators – many of whom the professors indict as having no visible value in improving the education for students bankrupting themselves to register for classes – but shamelessly increases the salaries of the academic administrative class. The top 21 administrators earn a combined total of $23,590,794 per year. The NYU portfolio includes many multi-million-dollar mansions and luxury condos, where deans and vice presidents live rent-free. ..."
"... As the managerial class grows, in size and salary, so does the full time faculty registry shrink. Use of part time instructors has soared to stratospheric heights at NYU. Adjunct instructors, despite having a minimum of a master's degree and often having a Ph.D., receive only miserly pay-per-course compensation for their work, and do not receive benefits. Many part-time college instructors must transform their lives into daily marathons, running from one school to the next, barely able to breathe between commutes and courses. Adjunct pay varies from school to school, but the average rate is $2,900 per course. ..."
"... New York Times ..."
"... to the people making decisions ..."
"... it's the executives and management generally. Just like Wall Street, many of these top administrators have perfected the art of failing upwards. ..."
"... What is the benefit? What are the risks? ..."
"... Sophomore Noell Conley lives there, too. She shows off the hotel-like room she shares with a roommate . ..."
"... "As you walk in, to the right you see our granite countertops with two sinks, one for each of the residents," she says. A partial wall separates the beds. Rather than trek down the hall to shower, they share a bathroom with the room next door. "That's really nice compared to community bathrooms that I lived in last year," Conley says. To be fair, granite countertops last longer. Tempur-Pedic is a local company - and gave a big discount. The amenities include classrooms and study space that are part of the dorm. Many of the residents are in the university's Honors program. But do student really need Apple TV in the lounges, or a smartphone app that lets them check their laundry status from afar? "Demand has been very high," says the university's Penny Cox, who is overseeing the construction of several new residence halls on campus. Before Central Hall's debut in August, the average dorm was almost half a century old, she says. That made it harder to recruit. " If you visit places like Ohio State, Michigan, Alabama," Cox says, "and you compare what we had with what they have available to offer, we were very far behind." Today colleges are competing for a more discerning consumer. Students grew up with fewer siblings, in larger homes, Cox says. They expect more privacy than previous generations - and more comforts. "These days we seem to be bringing kids up to expect a lot of material plenty," says Jean Twenge, a psychology professor at San Diego State University and author of the book "Generation Me." Those students could be in for some disappointment when they graduate , she says. "When some of these students have all these luxuries and then they get an entry-level job and they can't afford the enormous flat screen and the granite countertops," Twenge says, "then that's going to be a rude awakening." Some on campus also worry about the divide between students who can afford such luxuries and those who can't. The so-called premium dorms cost about $1,000 more per semester. Freshman Josh Johnson, who grew up in a low-income family and lives in one of the university's 1960s-era buildings, says the traditional dorm is good enough for him. ..."
"... "I wouldn't pay more just to live in a luxury dorm," he says. "It seems like I could just pay the flat rate and get the dorm I'm in. It's perfectly fine." In the near future students who want to live on campus won't have a choice. Eventually the university plans to upgrade all of its residence halls. ..."
"... Competition for students who have more sophisticated tastes than in past years is creating the perfect environment for schools to try to outdo each other with ever-more posh on-campus housing. Keeping up in the luxury dorm race is increasingly critical to a school's bottom line: A 2006 study published by the Association of Higher Education Facilities Officers found that "poorly maintained or inadequate residential facilities" was the number-one reason students rejected enrolling at institutions. PHOTO GALLERY: Click Here to See the 10 Schools with Luxury Dorms ..."
"... Private universities get most of the mentions on lists of schools with great dorms, as recent ratings by the Princeton Review, College Prowler, and Campus Splash make clear. But a few state schools that have invested in brand-new facilities are starting to show up on those reviews, too. ..."
"... While many schools offer first dibs on the nicest digs to upperclassmen on campus, as the war for student dollars ratchets up even first-year students at public colleges are living in style. Here are 10 on-campus dormitories at state schools that offer students resort-like amenities. ..."
"... Perhaps some students are afraid to protest for fear of being photographed or videographed and having their face and identity given to every prospective employer throughout America. Perhaps those students are afraid of being blackballed throughout the Great American Workplace if they are caught protesting anything on camera. ..."
"... Mao was perfectly content to promote technical education in the new China. What he deprecated (and fought to suppress) was the typical liberal arts notion of critical thinking. We're witnessing something comparable in the U.S. We're witnessing something comparable in the U.S. ..."
"... Many of the best students feel enormous pressure to succeed and have some inkling that their job prospects are growing narrower, but they almost universally accept this as the natural order of things. Their outlook: if there are 10 or 100 applicants for every available job, well, by golly, I just have to work that much harder and be the exceptional one who gets the job. ..."
"... I read things like this and think about Louis Althusser and his ideas about "Ideological State Apparatuses." While in liberal ideology the education is usually considered to be the space where opportunity to improve one's situation is founded, Althusser reached the complete opposite conclusion. For him, universities are the definitive bourgeois institution, the ideological state apparatus of the modern capitalist state par excellance . The real purpose of the university was not to level the playing field of opportunity but to preserve the advantages of the bourgeoisie and their children, allowing the class system to perpetuate/reproduce itself. ..."
"... My nephew asked me to help him with his college introductory courses in macroeconomics and accounting. I was disappointed to find out what was going on: no lectures by professors, no discussion sessions with teaching assistants; no team projects–just two automated correspondence courses, with automated computer graded problem sets objective tests – either multiple choice, fill in the blank with a number, or fill in the blank with a form answer. This from a public university that is charging tuition for attendance just as though it were really teaching something. All they're really certifying is that the student can perform exercises is correctly reporting what a couple of textbooks said about subjects of marginal relevance to his degree. My nephew understands exactly that this is going on, but still . ..."
"... The reason students accept this has to be the absolutely demobilized political culture of the United States combined with what college represents structurally to students from the middle classes: the only possibility – however remote – of achieving any kind of middle class income. ..."
"... Straight bullshit, but remember our school was just following the national (Neoliberal) model. ..."
Yves here. In May, we wrote up and embedded the report on how NYU exploits students and adjuncts in
"The Art of the Gouge": NYU as a Model for Predatory Higher Education. The article below uses that study as a point of departure
for its discussion of how higher education has become extractive.
By David Masciotra, the author of Mellencamp: American Troubadour (University Press of Kentucky). He has also written
for Salon, the Atlantic and the Los Angeles Review of Books. For more information visit www.davidmasciotra.com. Originally published
at Alternet
Higher education wears the cloak of liberalism, but in policy and practice, it can be a corrupt and cutthroat system of power
and exploitation. It benefits immensely from right-wing McCarthy wannabes, who in an effort to restrict academic freedom and silence
political dissent, depict universities as left-wing indoctrination centers.
But the reality is that while college administrators might affix "down with the man" stickers on their office doors, many prop
up a system that is severely unfair to American students and professors, a shocking number of whom struggle to make ends meet. Even
the most elementary level of political science instructs that politics is about power. Power, in America, is about money: who has
it? Who does not have it? Who is accumulating it? Who is losing it? Where is it going?
Four hundred faculty members at New York University, one of the nation's most expensive schools, recently released a report on
how their own place of employment, legally a nonprofit institution, has become a predatory business, hardly any different in ethical
practice or economic procedure than a sleazy storefront payday loan operator. Its title succinctly summarizes the new intellectual
discipline deans and regents have learned to master: "The
Art of The Gouge."
The result of their investigation reads as if Charles Dickens and Franz Kafka collaborated on notes for a novel. Administrators
not only continue to raise tuition at staggering rates, but they burden their students with inexplicable fees, high cost burdens
and expensive requirements like mandatory study abroad programs. When students question the basis of their charges, many of them
hidden during the enrollment and registration phases, they find themselves lost in a tornadic swirl of forms, automated answering
services and other bureaucratic debris.
Often the additional fees add up to thousands of dollars, and that comes on top of the already hefty tuition, currently $46,000
per academic year, which is more than double its rate of 2001. Tuition at NYU is higher than most colleges, but a bachelor's degree,
nearly anywhere else, still comes with a punitive price tag. According to the College Board, the average cost of tuition and fees
for the 2014–2015 school year was $31,231 at private colleges, $9,139 for state residents at public colleges, and $22,958 for out-of-state
residents attending public universities.
Robert Reich, in his book Supercapitalism, explains that in the past 30 years the two industries with the most excessive increases
in prices are health care and higher education. Lack of affordable health care is a crime, Reich argues, but at least new medicines,
medical technologies, surgeries, surgery techs, and specialists can partially account for inflation. Higher education can claim no
costly infrastructural or operational developments to defend its sophisticated swindle of American families. It is a high-tech, multifaceted,
but old fashioned transfer of wealth from the poor, working- and middle-classes to the rich.
Using student loan loot and tax subsidies backed by its $3.5 billion endowment, New York University has created a new administrative
class of aristocratic compensation. The school not only continues to hire more administrators – many of whom the professors indict
as having no visible value in improving the education for students bankrupting themselves to register for classes – but shamelessly
increases the salaries of the academic administrative class. The top 21 administrators earn a combined total of $23,590,794 per year.
The NYU portfolio includes many multi-million-dollar mansions and luxury condos, where deans and vice presidents live rent-free.
Meanwhile, NYU has spent billions, over the past 20 years, on largely unnecessary real estate projects, buying property and renovating
buildings throughout New York. The professors' analysis, NYU's US News and World Report Ranking, and student reviews demonstrate
that few of these extravagant projects, aimed mostly at pleasing wealthy donors, attracting media attention, and giving administrators
opulent quarters, had any impact on overall educational quality.
As the managerial class grows in size and salary, the full-time faculty roster shrinks. Use of part-time instructors
has soared to stratospheric heights at NYU. Adjunct instructors, despite having a minimum of a master's degree and often having a
Ph.D., receive only miserly pay-per-course compensation for their work, and do not receive benefits. Many part-time college instructors
must transform their lives into daily marathons, running from one school to the next, barely able to breathe between commutes and
courses. Adjunct pay varies from school to school, but the average rate is $2,900 per course.
Many schools offer rates far below the average, most especially community colleges paying only $1,000 to $1,500. Even at the best
paying schools, adjuncts, as part time employees, are rarely eligible for health insurance and other benefits. Many universities
place strict limits on how many courses an instructor can teach. According to a recent study, 25 percent of adjuncts
receive government assistance.
The actual scandal of "The Art of the Gouge" is that even if NYU is a particularly egregious offender of basic decency and honesty,
most of the report's indictments could apply equally to nearly any American university. From 2003-2013, college tuition increased
by a crushing
80 percent. That far outpaces all other inflation. The closest competitor was the cost of medical care, which in the same time
period, increased by a rate of 49 percent. On average, tuition in America rises eight percent on an annual basis, placing it far
outside the moral universe. Most European universities
charge only marginal fees for attendance, and many of them are free. Senator Bernie Sanders recently introduced a bill proposing
all public universities offer free education. It received little political support, and almost no media coverage.
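For reference, the article's own 80-percent-per-decade figure can be translated into an implied average annual rate by simple compounding (no new data, just the arithmetic):

\[
(1+r)^{10} = 1.80 \quad\Longrightarrow\quad r = 1.80^{1/10} - 1 \approx 0.061 \approx 6.1\% \text{ per year over 2003--2013.}
\]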
In order to obtain an education, students accept the paralytic weight of student debt, the only form of debt not dischargeable
in bankruptcy. Before a young person can even think about buying a car, house or starting a family, she leaves college with thousands
of dollars in debt: an average of $29,400 in 2012. As colleges continue to suck their students dry of every dime, the US government
profits at $41.3 billion per year by
collecting interest
on that debt. Congress recently cut funding for Pell Grants, yet increased the budget for hiring debt collectors to target delinquent
student borrowers.
The university, once an incubator of ideas and entrance into opportunity, has mutated into a tabletop model of America's economic
architecture, where the top one percent of income earners now owns 40 percent of the wealth.
"The One Percent at State U," an Institute for Policy Studies report, found that at the 25 public universities with the highest
paid presidents, student debt and adjunct faculty increased at dramatically higher rates than at the average state university. Marjorie
Wood, the study's co-author, told the New York Times that extravagant executive pay is the "tip of a very large
iceberg, with universities that have top-heavy executive spending also having more adjuncts, more tuition increases and more administrative
spending."
Unfortunately, students seem like passive participants in their own liquidation. An American student protest timeline for 2014-'15,
compiled by historian Angus Johnston, reveals that most demonstrations and rallies focused on police violence and sexism. Those
issues should inspire vigilance and activism, but only 10 out of 160 protests targeted tuition hikes for attack, and only two of
those 10 events took place
outside the state
of California.
Class consciousness and solidarity actually exist in Chile, where in 2011 a student movement began to organize, making demands
for free college. More than mere theater, high school and college students, along with many of their parental allies, engaged the
political system and made specific demands for inexpensive education. The Chilean government announced that in March 2016, it will
eliminate all tuition from public universities. Chile's victory for participatory democracy, equality of opportunity and social justice
should instruct and inspire Americans. Triumph over extortion and embezzlement is possible.
This seems unlikely to happen in a culture, however, where even most poor Americans view themselves, in the words of John Steinbeck,
as "temporarily embarrassed millionaires." The political, educational and economic ruling class of America is comfortable selling
out its progeny. In the words of one student quoted in "The Art of the Gouge," "they see me as nothing more than $200,000."
At a basic level, I think the answer is yes, because on balance, college still provides a lot of privatized value to the individual.
Being an exploited student with the College Credential Seal of Approval remains relatively much better than being an exploited
non student lacking that all important seal. A college degree, for example, is practically a guarantee of avoiding the
more unseemly parts of the US "justice" system.
But I think this is changing. The pressure is building from the bottom as academia loses credibility as an institution capable
of, never mind interested in, serving the public good rather than simply being another profit center for connected workers. It's
actually a pretty exciting time. The kiddos are getting pretty fed up, and the authoritarians at the top of the hierarchy are
running out of money with which to buy off younger technocratic enablers and thought leaders and other Serious People.
P.S., the author in this post demonstrates the very answer to the question. He assumes as true, without any need for support,
that the very act of possessing a college degree makes one worthy of a better place in society. That mindset is why colleges can
prey upon students. They hold a monopoly on access to resources in American society. My bold:
Adjunct instructors, despite having a minimum of a master's degree and often having a Ph.D., receive only
miserly pay-per-course compensation for their work, and do not receive benefits.
What does having a masters degree or PhD have to do with the moral claim of all human beings to a life of dignity and purpose?
There are so many more job seekers per job opening now than, say, 20 or 30 years ago that a degree is used to sort out
applications. A job that formerly listed a high school diploma as a requirement may now list a college degree as a requirement,
just to cut down on the number of applications.
So, no, a B.A. or B.S. doesn't confer moral worth, but it does open more job doors than a high school diploma, even if the
actual work only requires high school level math, reading, science or technology.
I agree a phd often makes someone no more useful in society. However the behaviour of the kids is rational *because* employers
demand a masters / phd.
Students are then caught in a trap. Employers demand the paper, often from an expensive institution. The credit is abundant
thanks to govt backed loans. They are caught in a situation where as a collective it makes no sense to join in, but as an individual
if they opt out they get hurt also.
Same deal for housing. It's a mad world my masters.
What can we do about this? The weak link in the chain seems to me to be employers. Why are they hurting themselves by selecting
people who want higher pay but may offer little to no extra value? I work as a programmer and I often think " if we could just
'see' the non-graduate diamonds in the rough".
If employers had perfect knowledge of prospective employees *and* if they saw that a degree would make no difference to their
performance universities would crumble overnight.
The state will never stop printing money via student loans. If we can fix recruitment then universities are dead.
Why are they hurting themselves by selecting people who want higher pay but may offer little to no extra value?
Yeah, I have thought a lot about that particular question of organizational behavior. It does make sense, conceptually, that
somebody would disrupt the system and take people based on ability rather than credentials. Yet we are moving in the opposite
direction, toward more rigidity in educational requirements for employment.
For my two cents, I think the bulk of the answer lies in how hiring specifically, and management philosophy more generally,
works in practice. The people who make decisions are themselves also subject to someone else's decisions. This is true all up
and down the hierarchical ladder, from board members and senior executives to the most junior managers and professionals.
It's true that someone without a degree may offer the same (or better) performance to the company. But they do not offer the
same performance to the people making decisions, because those individual people also depend upon their own college degrees
to sell their own labor services. To hire significant numbers of employees without degrees into important roles is to sabotage
their own personal value.
Very few people are willing to be that kind of martyr. And generally speaking, they tend to self-select away from occupations
where they can meaningfully influence decision-making processes in large organizations.
Absolutely, individual business owners can call BS on the whole scam. It is a way that individual people can take action against
systemic oppression. Hire workers based upon their fit for the job, not their educational credentials or criminal background or
skin color or sexual orientation or all of the other tests we have used. But that's not a systemic solution because the incentives
created by public policy are overwhelming at large organizations to restrict who is 'qualified' to fill the good jobs (and increasingly,
even the crappy jobs).
I am not so sure that this is so. So many jobs are now crapified. When I was made redundant in 2009, I could not find many
jobs that fit my level of experience (just experience! I have no college degree), so I applied for anything that fit my skill
set, pretty much regardless of level. I was called Overqualified. I have heard that in the past as well, but never more so during
that stretch of job hunting. Remember that's with no degree. Maybe younger people don't hear it as much. But I also think life
experience has something to do with it, you need to have something to compare it to. How many times did our parents tell us how
different things were when they were kids, how much easier? I didn't take that on board, did y'all?
For various reasons, people seeking work these days, especially younger job applicants, might not possess the habits of mind
and behavior that would make them good employees – i.e., punctuality, the willingness to come to work every day (even when something
more fun or interesting comes up, or when one has partied hard the night before), the ability to meet deadlines rather than make
excuses for not meeting them, the ability to write competently at a basic level, the ability to read instructions, diagrams, charts,
or any other sort of necessary background material, the ability to handle basic computation, the ability to FOLLOW instructions
rather than deciding that one will pick and choose which rules and instructions to follow and which to ignore, trainability, etc.
Even if a job applicant's degree is in a totally unrelated field, the fact that he or she has managed to complete an undergraduate
degree–or, if relevant, a master's or a doctorate – is often accepted by employers as a sign that the applicant has a sense of
personal responsibility, a certain amount of diligence and educability, and a certain level of basic competence in reading, writing,
and math.
By the same token, employers often assume that an applicant who didn't bother going to college or who couldn't complete a college
degree program is probably not someone to be counted on to be a responsible, trainable, competent employee.
Obviously those who don't go to college, or who go but drop out or flunk out, end up disadvantaged when competing for jobs,
which might not be fair at all in individual cases, especially now that college has been priced so far out of the range of so
many bright, diligent students from among the poor and working classes, and now even those from the middle class.
Nevertheless, in general an individual's ability to complete a college degree is not an unreasonable stand-in as evidence of
that person's suitability for employment.
Students are first caught in a trap of "credentials inflation" needed to obtain jobs, then caught by inflation in education
costs, then stuck with undischargeable debt. And the more of them who get the credentials, the worse the credentials inflation–a
spiral.
It's all fuelled by loose credit. The only beneficiaries are a managerial elite who enjoy palatial facilities.
As for the employers, they're not so bad off. Wages are coming down for credentialled employees due to all the competition.
There is such a huge stock of degreed applicants that they can afford to ignore anyone who isn't. The credentials don't cost the
employer–they're not spending the money, nor are they lending the money.
Modern money makes it possible for the central authorities to keep this racket going all the way up to the point of general
systemic collapse. Why should they stop? Who's going to make them stop?
The only reason the universities can get away with it is easy money. When the time comes that students actually need to pay
tuition with real money, money they or their parents have actually saved, then college tuition rates will crash back down to earth.
Don't blame the universities. This is the natural and inevitable outcome of easy money.
Yes, college education in the US is a classic example of the effects of subsidies. Eliminate the subsidies and the whole education
bubble would rapidly implode.
I'm very curious if anyone will disagree with that assessment.
An obvious commonality across higher education, healthcare, housing, criminal justice, and national security is that we spend
huge quantities of public money yet hold the workers receiving that money to extremely low standards of accountability for what
they do with it.
Correct, it's not the universities, it's the culture that contains the universities. But the universities are training grounds
for the culture, so it is the universities - just not only the universities. I've been remembering the song from my college days, "my future's
so bright I gotta wear shades". Getting rich was the end in itself, and people who didn't make it didn't deserve anything but
a whole lot of student debt, creating perverse incentives. And now we all know what the A in type A stands for, at least among those
who self-identify as such, so yes, it is the universities.
I don't understand why the ability to accept guaranteed loan money doesn't come with an obligation by the school to cap tuition
at a certain percentage over maximum loan amount? Would that be so hard to institute?
Student loans are debt issuance. Western states are desperate to issue debt as it's fungible with money and marked down as
growth.
Borrow 120K over 3 years and it all gets paid into university coffers and reappears as "profit" now. Let some other president
deal with low disposable income due to loan repayments. It's in a different electoral cycle – perfect.
You can try to argue, but it will be hard to refute. If you give mortgages at teaser rates to anybody who can fog a mirror,
you get a housing bubble. If you give student loans to any student without regard to the prospects of that student paying back
the loan, you get a higher education bubble. Which will include private equity trying to catch as much of this money as they possibly
can by investing in for profit educational institutions just barely adequate to benefit from federal student loan funds.
A lot of background conditions help. It helps to pump a housing bubble if there's nothing else worth investing in (including
saving money at zero interest rates). It helps pump an education bubble if most of the jobs have been outsourced so people are
competing more and more for fewer and fewer.
I don't disagree with the statement that easy money has played the biggest role in jacking up tuition. I do strongly disagree
that we shouldn't "blame" the universities. The universities are exactly where we should place the blame. The universities have
become job training grounds, and yet continue to drone on and on about the importance of noble things like liberal education,
the pursuit of knowledge, the importance of ideas, etc. They cannot have it both ways. Years ago, when tuition rates started escalating
faster than inflation, the universities should have been the loudest critics – pointing out the cultural problems that would accompany
sending the next generation into the future deeply indebted – namely that all the noble ideas learned at the university would
get thrown out the window when financial reality forced recent graduates to choose between noble ideas and survival. If universities
truly believed that a liberal education was important; that the pursuit of knowledge benefitted humanity – they should have led
the charge to hold down tuition.
I took it to mean blame as in what allows the system to function. I heartily agree that highly paid workers at universities
bear blame for what they do (and don't do) at a granular level.
It's just that they couldn't do those things without the system handing them gobs of resources, from tax deductibility of charitable
contributions to ignoring anti-competitive behavior in local real estate ownership to research grants and other direct funding
to student loans and other indirect funding.
Regarding blaming "highly paid workers at universities" – If a society creates incentives for dysfunctional behavior such a
society will have a lot of dysfunction. Eliminate the subsidies and see how quicly the educational bubble pops.
You are ignoring the way that the rich bid up the cost of everything. 2% of the population will pay whatever the top dozen
or so schools will charge so that little Billy or Sue can go to Harvard or Stanford. This leads to cost creep as the next tier
ratchets up its prices in lockstep with those above it, etc. The same dynamic happens with housing, at least around wealthy
metropolitan areas.
A European perspective on this: yep, that's true from an international perspective. I belong to the ugly list of those readers
of this blog who do not fully share the liberal values of most of you here. However, may I say that I can agree on a lot of stuff.
US education and health-care are outrageously costly. Every European citizen moving to the States has a question: will he or
she be sick whilst there? Every European parent with kids in higher education is aware that having their kids for one closing
year in the US is the most they can afford (except if you are a banquier d'affaires - an investment banker). Is the value of a US education good? No doubt!
Is it good value for money? Of course not. Is the return on the money OK? It will prove disastrous, except if the USD crashes.
The main reason? Easy money. As for any kind of investment. Remember that this is indeed an investment plan.
Check the level of revenues of "public sector" teaching staff on both sides of the pond. The figures for US professionals in
these areas are available on the Web. They are indeed much more costly than, say, their North-of-Europe counterparts, "public sector"
professionals in those areas. Is higher education in the Netherlands sub-par when compared to the US? Of course not.
Yep, financing education via the Fed (directly or not) is not only insanely costly. It is just insane. The only decent solution: set
up public institutions staffed with service-minded professionals who did not have to pay an insane sum to build up the curriculum
themselves.
Are "public services" less efficient than private ones here in those area, health-care and higher education. Yep, most certainly.
But, sure, having the fed indirectly finance the educational system just destroy any competitive savings made in building a competitive
market-orientated educational system and is one of the worst way to handle your educational system.
Yep, you can do a worst use of the money, subprime or China buildings But that's all about it.
US should forget about exceptionnalism and pay attention to what North of Europe is doing in this area. Mind you, I am Southerner
(of Europe). But of course I understand that trying to run these services on a federal basis is indeed "mission impossible".
Way to big! Hence the indirect Washington-decided Wall-Street-intermediated Fed-and-deficit-driven financing of higher education.
Mind you: we have more and more of this bankers meddling in education in Europe and I do not like what I see.
@washunate – 6/26/15, 11:03 am. I know I'm late to the party, but I disagree. It's not the workers, it's the executives
and management generally. Just like Wall Street, many of these top administrators have perfected the art of failing upwards.
IMNSHO everyone needs to stop blaming labor and/or the labor unions. It's not the front line workers, teachers, retail clerks,
adjunct instructors, all those people who do the actual work rather than managing other people. Those workers have no bargaining
power, and the unions have lost most of theirs, in part due to the horrible labor market, as well as other important reasons.
We have demonized virtually all of the government workers who actually do the work that enables us to even have a government
(all levels) and to provide the services we demand, such as public safety, education, and infrastructure. These people are our
neighbors, relatives and friends; we owe them better than this.
Unionized support staff at Canadian universities have had sub-inflation wage increases for nearly 20 years, while tuition has
been rising at triple the rate of inflation.
So obviously one can't blame the unions for rising education costs.
Omitted from this account: Federal funding for education has declined 55% since 1972. Part of the Powell memo's agenda.
It's understandable too; one can hardly blame legislators for punishing the educational establishment given the protests of
the '60s and early '70s. After all, they were one reason Nixon and Reagan rose to power. How dare they propose real democracy!
Harumph!
To add to students' burden, there's the recent revision of bankruptcy law: student loans can no longer be retired by bankruptcy
(Thanks Hillary!) It'll be interesting to see whether Hillary's vote on that bankruptcy revision becomes a campaign issue.
I also wonder whether employers will start to look for people without degrees as an indication they were intelligent enough
to sidestep this extractive scam.
I'd be curious what you count as federal funding. Pell grants, for example, have expanded both in terms of the number of recipients
and the amount of spending over the past 3 – 4 decades.
More generally, federal support for higher ed comes in a variety of forms. The bankruptcy law you mention is itself a form
of federal funding. Tax exemption is another. Tax deductibility of contributions is another. So are research grants and exemptions
from anti-competitive laws and so forth. There are a range of individual tax credits and deductions. The federal government also
does not intervene in a lot of state supports, such as licensing practices in law and medicine that make higher ed gatekeepers
to various fiefdoms and allowing universities to take fees for administering (sponsoring) charter schools. The Federal Work-Study
program is probably one of the clearest specific examples of a program that offers both largely meaningless busy work and terrible
wages.
As far as large employers seeking intelligence, I'm not sure that's an issue in the US? Generally speaking, the point of putting
a college credential in a job requirement is precisely to find people participating in the 'scam'. If an employer is genuinely
looking for intelligence, they don't have minimum educational requirements.
Why would tuition rates come down when students need to pay with "real money, money they or their parents have actually saved.
. . "? Didn't tuition at state universities begin climbing when state governments began withholding the taxpayer support they
formerly provided, leaving the state universities to try making up the difference by raising tuition? If people want to limit or
reduce the tuition charged to in-state students of state universities, people will have to resume paying former rates of taxes
and elect people to state government to re-target those taxes back to state universities the way they used to do before the
reductions in state support.
Protest against exploitation and risk being blacklisted by exploitative employers -> the only employers left are the ones who
actually do want (not pretend to want) ethical people willing to stand up for what they believe in. Not many of that kind of
employer around. What is the benefit? What are the risks?
The author misrepresents the nature and demands of Chile's student movement.
Over the past few decades, university enrollment rates for Chileans expanded dramatically in part due to the creation of many
private universities. In Chile, public universities lead the pack in terms of academic reputation and entrance is determined via
competitive exams. As a result, students from poorer households who attended low-quality secondary schools generally need to look
at private universities to get a degree. And these are the students to whom the newly created colleges catered.
According to Chilean legislation, universities can only function as non-profit entities. However, many of these new institutions
were only nominally non-profit entities (for example, the owners of the university would also set up a real estate company that
would rent the facilities to the college at above-market prices) and they were very much lacking in quality. After a series of
high-profile cases of universities that opened and shut within a few years, leaving their students in limbo and in debt, anger
mounted over for-profit education.
The widespread support for the student movement was due to generalized anger about an education system that is sorely lacking
in quality and about the violation of the spirit of the law regulating education. Once the student movement's demands became more
specific and morphed from opposing for-profit institutions to demanding free tuition for everyone, the widespread support waned
quickly.
And while the government announced free tuition in public universities, there is a widespread consensus that this is a pretty
terrible idea, as it is regressive and involves large fiscal costs, in particular because most of the students who attend public
universities come from relatively wealthy households that can afford tuition. The students who need the tuition assistance will
not benefit under the new rules.
I personally benefited from the fantastically generous financial aid systems that some private American universities have set
up which award grants and scholarships based on financial need only. And I believe that it is desirable for the State to guarantee
that any qualified student has access to college regardless of his or her wealth. I think that by romanticizing the Chilean student
movement the author reveals himself to be either dishonest or, at best, ignorant.
Students aren't protesting because they don't feel the consequences until they graduate.
One thing that struck me when I applied for a student loan a few years back to help me get through my last year of graduate
school – the living expense allocation was surprisingly high. Not "student sharing an apartment with five random dudes while eating
ramen and riding the bus", but more "living alone in a nice one-bedroom apartment while eating takeout and driving a car". Apocryphal
stories of students using their student loans to buy new cars or take extravagant vacations were not impossible to believe.
The living expense portion of student loans is often so generous that students can live relatively well while going to school,
which makes it that much easier for them to push to the backs of their minds the consequences that will come from so much debt
when they graduate. Consequently, it isn't the students who are complaining – it's the former students. But by the time
they are out of school and the university has their money in its pocket, it's too late for them to try and change the system.
Sophomore Noell Conley lives there, too. She shows off the hotel-like room she shares with a roommate.
"As you walk in, to the right you see our granite countertops with two sinks, one for each of the residents," she says.
A partial wall separates the beds. Rather than trek down the hall to shower, they share a bathroom with the room next door.
"That's really nice compared to community bathrooms that I lived in last year," Conley says.
To be fair, granite countertops last longer. Tempur-Pedic is a local company - and gave a big discount. The amenities include
classrooms and study space that are part of the dorm. Many of the residents are in the university's Honors program. But do students
really need Apple TV in the lounges, or a smartphone app that lets them check their laundry status from afar?
"Demand has been very high," says the university's Penny Cox, who is overseeing the construction of several new residence halls
on campus. Before Central Hall's debut in August, the average dorm was almost half a century old, she says. That made it harder
to recruit.
"If you visit places like Ohio State, Michigan, Alabama," Cox says, "and you compare what we had with what they have
available to offer, we were very far behind."
Today colleges are competing for a more discerning consumer. Students grew up with fewer siblings, in larger homes, Cox says.
They expect more privacy than previous generations - and more comforts.
"These days we seem to be bringing kids up to expect a lot of material plenty," says Jean Twenge, a psychology professor at
San Diego State University and author of the book "Generation Me."
Those students could be in for some disappointment when they graduate, she says.
"When some of these students have all these luxuries and then they get an entry-level job and they can't afford the enormous
flat screen and the granite countertops," Twenge says, "then that's going to be a rude awakening."
Some on campus also worry about the divide between students who can afford such luxuries and those who can't. The so-called
premium dorms cost about $1,000 more per semester. Freshman Josh Johnson, who grew up in a low-income family and lives
in one of the university's 1960s-era buildings, says the traditional dorm is good enough for him.
"I wouldn't pay more just to live in a luxury dorm," he says. "It seems like I could just pay the flat rate and
get the dorm I'm in. It's perfectly fine."
In the near future students who want to live on campus won't have a choice. Eventually the university plans to upgrade all of
its residence halls.
So I wonder who on average will fare better navigating the post-college lifestyle/job-market reality check, Noell or Josh?
Personally, I would bet on the Joshes living in the '60s-vintage enamel-painted cinderblock dorm rooms.
Competition for students who have more sophisticated tastes than in past years is creating the perfect environment for
schools to try to outdo each other with ever-more posh on-campus housing. Keeping up in the luxury dorm race is increasingly critical
to a school's bottom line: A 2006 study published by the Association of Higher Education Facilities Officers found that
"poorly maintained or inadequate residential facilities" was the number-one reason students rejected enrolling at institutions.
Private universities get most of the mentions on lists of schools with great dorms, as recent ratings by the Princeton Review,
College Prowler, and Campus Splash make clear. But a few state schools that have invested in brand-new facilities are starting
to show up on those reviews, too.
While many schools offer first dibs on the nicest digs to upperclassmen on campus, as the war for student dollars ratchets
up, even first-year students at public colleges are living in style. Here are 10 on-campus dormitories at state schools that offer
students resort-like amenities.
Bingo! They don't get really mad until they're in their early thirties and they are still stuck doing some menial job with
no vacation time, no health insurance and a monstrous mountain of debt. Up until that point they're still working hard waiting
for their ship to come in and blaming themselves for any lack of success like Steinbeck's 'embarrassed millionaires.' Then one
day maybe a decade after they graduate they realize they've been conned but they've got bills to pay and other problems to worry
about so they soldier on. 18-year-olds are told by their high school guidance counselors, their parents and all of the adults
they trust that college while expensive is a good investment and the only way to succeed. Why should they argue? They don't know
any better yet.
Perhaps some students are afraid to protest for fear of being photographed or videographed and having their face and identity
given to every prospective employer throughout America. Perhaps those students are afraid of being blackballed throughout the
Great American Workplace if they are caught protesting anything on camera.
Today isn't like the sixties when you could drop out in the confidence that you could always drop back in again. Nowadays there
are ten limpets for every scar on the rock.
the average is such a worthless number. The Data we need, and which all these parasitic professional managerial types won't
provide –
x axis would be family income, by $5000 increments.
y axis would be the median debt level
we could get fancy, and also throw in how many kids are in school in each of those income increments.
BTW – this 55 yr. old troglodyte believes that 1 of the roles (note – I did NOT say "The Role") of education is preparing people
to be useful to society. 300++ million Americans, 7 billion humans – we ALL need shelter, reliable and safe food, reliable and safe
water, sewage disposal, clothing, transportation, education, sick care, power, leisure, we should ALL have access to family wage
jobs and time for BBQs with our various communities several times a year. I know plenty of techno-dweebs here in Seattle who need
to learn some of the lessons of 1984, The Prince, and Shakespeare. I know plenty of fuzzies who could be a bit more useful with
some rudimentary skills in engineering, or accounting, or finance, or stats, or bio, or chem
I don't know what the current education system is providing, other than some accidental good things for society at large, and
mainly mechanisms for the para$ite cla$$e$ to stay parasites.
Mao was perfectly content to promote technical education in the new China. What he deprecated (and fought to suppress) was
the typical liberal arts notion of critical thinking. We're witnessing something comparable in the U.S.
This suppression in China led to an increase in Mao's authority (obviously), but kept him delusional. For example, because
China relied on Mao's agricultural advice, an estimated 70 million Chinese died during peacetime. But who else was to be relied
upon as an authority?
Back to the U.S.S.A. (the United StateS of America): One Australian says of the American system: "You Yanks don't consult
the wisdom of democracy; you enable mobs."
Mao was perfectly content to promote technical education in the new China. What he deprecated (and fought to suppress)
was the typical liberal arts notion of critical thinking. We're witnessing something comparable in the U.S.
Mao liked chaos because he believed in continuous revolution. I would argue what we're experiencing is nothing comparable to
what China experienced. (I hope I've understood you correctly.)
I am pretty sure a tradition of protest to effect political change in the US is a rather rare bird. Most people "protest" by
changing their behavior. As an example, by questioning the value of the $46,000 local private college tuition as opposed to the
$15k and $9k tiered state college options. My daughter is entering the freshman class next year; we opted for the cheaper state
option because, in the end, a private school degree adds nothing, unless it is from a high-name-recognition institution.
I think, like housing, a downstream consequence of "the gouge" is not to question - much less understand - class relations,
but to assess the value of the lifestyle choice once you are stuck with the price of paying for that lifestyle in the form of inflated
debt repayments. Eventually "the folk" figure it out and encourage cheaper alternatives toward the same goal.
There's probably little point in engaging in political protest. Most people maximise their chances of success by focusing on
variables over which they have some degree of control. The ability of most people to have much effect on the overall political-economic
system is slight and any returns from political activity are highly uncertain.
How does anyone even expect to maintain cheap available state options without political activity? By wishful thinking I suppose?
The value of a private school might be graduating sooner, state schools are pretty overcrowded, but that may not at all be
worth the debt (I doubt it almost ever is on a purely economic basis).
Maybe if we just elect the right people with cool posters and a hopey changey slogan, they'll take care of everything for us
and we won't have to be politically active.
Of course refusal to engage politically because the returns to oneself by doing so are small really IS the tragedy of the commons.
Thus one might say it's ethical to engage politically in order to avoid it. Some ethical action focuses on overcoming tragedy
of the commons dilemmas. Of course the U.S. system being what it is I have a hard time blaming anyone for giving up.
The middle class, working class and poor have no voice in politics or policy at all, and they don't know what's going on until
it's too late. They've been told by all their high school staff that college is the only acceptable option - and often it is.
What else are they going to do out of high school, work a 30 hour a week minimum wage retail job? The upper middle class and rich,
who entirely monopolize the media, don't have any reason to care about skyrocketing college tuition - their parents are paying
for it anyway. They'd rather write about the hip and trendy issues of the day, like trigger warnings.
Speaking as one of these college students, I think that a large part of the reason that the vast majority of students are just
accepting the tuition rates is because it has become the societal norm. Growing up I can remember people saying "You need to go
to college to find a good job." Because a higher education is seen as a necessity for most people, students think of tuition as
just another form of taxes, acceptable and inevitable, which we will expect to get a refund on later in life.
I teach at a "good" private university. Most of my students don't have a clue as to how they're being exploited. Many of
the best students feel enormous pressure to succeed and have some inkling that their job prospects are growing narrower, but they
almost universally accept this as the natural order of things. Their outlook: if there are 10 or 100 applicants for every available
job, well, by golly, I just have to work that much harder and be the exceptional one who gets the job.
Incoming freshmen were born in the late 90s - they've never known anything but widespread corruption, financial and corporate
oligarchy, iPads and the Long Recession.
But as other posters note, the moment of realization usually comes after four years of prolonged adolescence, luxury dorm living
and excessive debt accumulation.
Most Ph.D.'s don't either. I'd argue there have been times they have attempted to argue that exploitation is a good thing, for their
employer and for themselves, with linguistic games. Mind-numbing. To be fair, they have a job.
I have watched the tuition double–double!–at my alma mater in the last eleven years. During this period, administrators have
set a goal of increasing enrollment by a third, and from what I hear, they've done so. My question is always this: where is the
additional tuition money going? Because as I walk through the campus, I don't really see that many improvements–yes, a new building,
but that was supposedly paid for by donations and endowments. I don't see new offices for these high-priced admin people that
colleges are hiring, and in fact, what I do see is an increase in the number of part-time faculty and adjuncts. The tenured faculty
is not prospering from all this increased revenue, either.
I suspect the tuition is increasing so rapidly simply because the college can get away with it. And that means they are exploiting
the students.
While still a student, I once calculated that it cost me $27.00/hour to be in class (15 weeks x 20 "contact hours" per week
= 300 hours/semester; $8,000/semester divided by 300 hours ≈ $27.00/hour). A crude calculation, certainly, but a starting point.
I did this because I had an instructor who was consistently late to class, and often cancelled class, so much that he wiped out
at least $300.00 worth of instruction. I had the gall to ask for a refund of that amount. I'm full of gall. Of course, I was laughed
at, not just by the administrators, but also by some students.
Just like medical care, education pricing is "soft," that is, the price is what you are willing to pay. Desirable students
get scholarships and stipends, which other students subsidize; similarly, some pre-ACA patients in hospitals were often treated
gratis.
Students AND hospital patients alike seem powerless to affect the contract with the provider. Reform will not likely be forthcoming,
as students, like patients, are "just passing through."
The tuition at most public universities has quadrupled or more over the last 15 to 20 years precisely BECAUSE state government
subsidies have been
slashed in the meantime. I was told around 2005 that quadrupled tuition at the University of Minnesota made up for about half
of the state money that the legislature had slashed from the university budget over the previous 15 years.
It is on top of that situation that university administrators are building themselves little aristocratic empires, very much
modeled on the kingdoms of corporate CEOs
where reducing expenses (cutting faculty) and services to customers (fewer classes, more adjuncts) is seen as the height of responsibility
and accountability, perhaps
even the definition of propriety.
Everyone should read the introductory chapter to David Graeber's " The Utopia of Rules: On Technology, Stupidity and the Secret
Joys of Bureaucracy."
In Chapter One of this book entitled "The Iron law of Liberalism and the Era of Total Bureaucratization" Graeber notes that
the US has become the most rigidly credentialised society in the world where
" in field after field from nurses to art teachers, physical therapists, to foreign policy consultants, careers which used
to be considered an art (best learned through doing) now require formal professional training and a certificate of completion."
Graeber, in that same chapter, makes another extremely important point when he notes that career advancement in many large
bureaucratic organizations demands a willingness to play along with the fiction that advancement is based on merit, even though
most everyone knows that this isn't true.
The structure of modern power in the U.S., in both the merging public and private sectors, is built around the false ideology
of a giant credentialized meritocracy rather than the reality of arbitrary extraction by predatory bureaucratic networks.
Anecdote: I was speaking to someone who recently started working as a law school administrator at my alma mater. Enrollment
is actually down at law schools (I believe), because word has spread about the lame legal job market. So, the school administration
is watching its pennies, and the new administrator says the administrators aren't getting to go on so many of the all expense
paid conferences and junkets that they used to back in the heyday. As I hear this, I am thinking about how many of these awesome
conferences in San Diego, New Orleans and New York that I'm paying back. Whatever happened to the metaphorical phrase: "when a
pig becomes a hog, it goes to slaughter"?
Another anecdote: I see my undergrad alma mater has demolished the Cold War era dorms on one part of campus and replaced it
with tons of slick new student housing.
No doubt those Cold War era dorms had outlived their planned life. Time for replacement. Hell, they had probably become uninhabitable
and unsafe.
Meanwhile, has your undergraduate school replaced any of its lecture courses with courses presented on the same model as on-line traffic
school? I have a pending comment below about how my nephew's public university "taught" him introductory courses in accounting
and macroeconomics that way. Please be assured that the content of those courses was on a par with best practices in the on-line
traffic school industry. It would be hilarious if it weren't so desperately sad.
I read things like this and think about Louis Althusser and his ideas about "Ideological State Apparatuses." While in liberal
ideology the education is usually considered to be the space where opportunity to improve one's situation is founded, Althusser
reached the complete opposite conclusion. For him, universities are the definitive bourgeois institution, the ideological state
apparatus of the modern capitalist state par excellence. The real purpose of the university was not to level the playing
field of opportunity but to preserve the advantages of the bourgeoisie and their children, allowing the class system to perpetuate/reproduce
itself.
It certainly would explain a lot. It would explain why trying to send everyone to college won't solve this, because not everyone
can have a bourgeois job. Some people actually have to do the work. The whole point of the university as an institution was to
act as a sorting/distribution hub for human beings, placing them at certain points within the division of labor. A college degree
used to mean more because getting it was like a golden ticket, guaranteeing someone who got it at least a petit-bourgeois lifestyle.
The thing is, there are only so many slots in corporate America for this kind of employment. That number is getting smaller too.
You could hand every man, woman, and child in America a BS and it wouldn't change this in the slightest.
What has happened instead, for college to preserve its role as the sorting mechanism/preservation of class advantage is what
I like to call degree inflation and/or an elite formed within degrees themselves. Now a BS or BA isn't enough; one needs a Master's
or PhD to really be distinguished. Now a degree from just any institution won't do, it has to be an Ivy or a Tier 1 school. Until
we learn to think realistically about what higher education is as an institution little or nothing will change.
Any credential is worthless if everybody has it. All information depends on contrast. It's impossible for everybody to "stand
out" from the masses. The more people have college degrees the less value a college degree has.
When I was half-grown, I heard it said that religion is no longer the opiate of the masses, in that no one believes in God
anymore, at least not enough for it to change actual behavior.
Instead, buying on credit is the opiate of the masses.
My nephew asked me to help him with his college introductory courses in macroeconomics and accounting. I was disappointed
to find out what was going on: no lectures by professors, no discussion sessions with teaching assistants, no team projects – just
two automated correspondence courses, with automated, computer-graded problem sets and objective tests – either multiple choice, fill
in the blank with a number, or fill in the blank with a form answer. This from a public university that is charging tuition for
attendance just as though it were really teaching something. All they're really certifying is that the student can perform exercises
in correctly reporting what a couple of textbooks said about subjects of marginal relevance to his degree. My nephew understands
exactly what is going on, but still…
This is how 21st century America treats its young people: it takes people who are poor, in the sense that they have no assets,
and makes them poorer, loading them up with student debt, which they incur in order to finance a falsely-so-called course of university
study that can't be a good deal, even for the best students among them.
I am not suggesting the correspondence courses have no worth at all. But they do not have the worth that is being charged for
them in this bait-and-switch exercise by Ed Business.
After further thought, I'd compare my nephew's two courses to on-line traffic school: Mechanized "learning" – forget it all
as soon as the test is over – Critical thinking not required. Except for the kind of "test preparation" critical thinking that
teaches one to spot and eliminate the obviously wrong choices in objective answers–that kind of thinking saves time and so is
very helpful.
Not only is he paying full tuition to receive this treatment, but his family and mine are paying taxes to support it, too.
Very useful preparation for later life, where we can all expect to attend traffic school a few times. But no preparation for
any activity of conceivable use or benefit to any other person.
I read recently that the business establishment viewed the most important contribution of colleges was that they warehoused
young people for four years to allow maturing.
Where are the young people in all this? Is anyone going to start organizing to change things? Any ideas? Any interest? Are
we going to have some frustrated, emotional person attempt to kill a university president once every ten years? Then education
can appeal for support from the government to beef up security. Meanwhile the same old practices will prevail and the rich get
richer and the rest of us get screwed.
The reason students accept this has to be the absolutely demobilized political culture of the United States combined with
what college represents structurally to students from the middle classes: the only possibility – however remote – of achieving
any kind of middle class income.
Really your choices in the United States are, in terms of jobs, to go into the military (and this is really for working class
kids, Southern families with a military history and college-educated officer-class material) or to go to college.
The rest, who have no interest in the military, attend college, much like those who wanted to achieve despite their class
background went into the priesthood in the medieval period. There hasn't been a revolt due to the lack of any idea it could function
differently and that American families are still somehow willing to pay the exorbitant rates to give their children a piece of
paper that still enables them to claim middle class status though fewer and fewer find jobs. $100k in debt seems preferable to
no job prospects at all.
Colleges have become a way for the ruling class to launder money into supposed non-profits and use endowments to purchase stocks,
bonds, and real estate. College administrators and their lackeys (the extended school bureaucracy) are propping up another part
of the financial sector – just take a look at Harvard's $30+ billion endowment, or Yale's $17 billion – these are just the top
of a very large heap. They're all deep into the financial sector. Professors and students are simply there as an excuse for the
alumni money machine and real estate scams to keep running, but there's less and less of a reason for them to employ professors,
and I say this as a PhD with ten years of teaching experience who has seen the market dry up even more than it was when I entered
grad school in the early 2000s.
"Colleges have become a way for the ruling class to launder money into supposed non-profits and use endowments to purchase
stocks, bonds, and real estate. "
Unorthodoxmarxist, I thought I was the only person who was coming to that conclusion. I think there's data out there that could
support our thesis that college tuition inflation may be affecting real estate prices. After all, the justification a college grad
gave to someone who was questioning the value of a college degree was that by obtaining "a degree" and a professional job, an
adult could afford to buy a home in a major metropolitan hub. I'm not sure if he was that ignorant (business majors, despite the
math requirement, are highly ideological people; they're nowhere near as objective as they like to portray themselves as) or if
he hasn't been in contact with anyone with a degree trying to buy a home in a metropolitan area.
Anyways, if our thesis is true, then when home prices declined in 2009, college tuition should have declined as well, but
it didn't at most trustworthy schools. Prospective students kept lining up to pay more for an education that many insiders believe
is "getting worse" because of widespread propaganda and a lack of alternatives, especially for "middle class" women.
It's hard to say, but there ought to be a powder keg of students here primed to blow. And Bernie Sanders' proposal for free
college could be the fuse.
But first he'd have to light the fuse, and maybe he can. He's getting huge audiences and a lot of interest these days. And
here's a timely issue. What would happen if Sanders toured colleges and called for an angry, mass and extended student strike
across the country to launch on a certain date this fall or next spring to protest these obscene tuitions and maybe call for something
else concrete, like a maximum ratio of administrators to faculty for colleges to receive accreditation?
It could ignite not only a long-overdue movement on campuses but also give a big boost to his campaign. He'd have millions
of motivated and even furious students on his side as well as a lot of motivated and furious parents of students (my wife and
I would be among them) - and these are just the types of people likely to get out and vote in the primaries and general election.
Sanders' consistent message about the middle class is a strong one. But here's a solid, specific but very wide-ranging issue
that could bring that message into very sharp relief and really get a broad class of politically engaged people fired up.
I'm not one of those who think Sanders can't win but applaud his candidacy because it will nudge Hillary Clinton. I don't give
a fig about Clinton. I think there's a real chance Sanders can win not just the nomination but also the presidency. This country
is primed for a sharp political turn. Sanders could well be the right man in the right place and time. And this glaring and ongoing
tuition ripoff that EVERYONE agrees on could be the single issue that puts him front-and-center rather than on the sidelines.
I finished graduate school about three years ago. During the pre-graduate terms that I paid out of pocket (2005-2009) I saw
a near 70 percent increase in tuition (look up KY college tuition 1987-2009 for proof).
Straight bullshit, but remember our school was just following the national (Neoliberal) model.
Though, realize that I was 19-23 years old. Very immature (still immature) and feeling forces beyond my control. I did not
protest out of a) fear [?] (I don't know, maybe, just threw that in there) or b) belief that the sheepskin would be the path to
salvation (plus social/cultural pressures from parents, etc.).
I was more affected by b). This is the incredible power of our current Capitalist culture. It trains us well. We are always
speaking its language, as if a Classic. Appraising its world through its values.
I wished to protest (i.e. Occupy, etc.) but to which master? All of its targets are post modern, all of it, to me, nonsense,
and, because of this, undead (unable to be destroyed). This coming from a young man, as I said, still immature, though I fear this
misdirection and alienation are affecting us all.
"... "In many ways the effect of the crash on embezzlement was more significant than on suicide. To the economist embezzlement is
the most interesting of crimes. Alone among the various forms of larceny it has a time parameter. Weeks, months or years may elapse
between the commission of the crime and its discovery. (This is a period, incidentally, when the embezzler has his gain and the man
who has been embezzled, oddly enough, feels no loss. There is a net increase in psychic wealth.) ..."
"... At any given time there exists an inventory of undiscovered embezzlement in – or more precisely not in – the country's business
and banks. ..."
"... This inventory – it should perhaps be called the bezzle – amounts at any moment to many millions [trillions!] of dollars. It
also varies in size with the business cycle. ..."
"... In good times people are relaxed, trusting, and money is plentiful. But even though money is plentiful, there are always many
people who need more. Under these circumstances the rate of embezzlement grows, the rate of discovery falls off, and the bezzle increases
rapidly. ..."
"... In depression all this is reversed. Money is watched with a narrow, suspicious eye. The man who handles it is assumed to be
dishonest until he proves himself otherwise. Audits are penetrating and meticulous. Commercial morality is enormously improved. The
bezzle shrinks ..."
John Kenneth Galbraith, from "The Great Crash 1929":
"In many ways the effect of the crash on embezzlement was more significant than on suicide. To the economist embezzlement
is the most interesting of crimes. Alone among the various forms of larceny it has a time parameter. Weeks, months or years
may elapse between the commission of the crime and its discovery. (This is a period, incidentally, when the embezzler has his
gain and the man who has been embezzled, oddly enough, feels no loss. There is a net increase in psychic wealth.)
At any given time there exists an inventory of undiscovered embezzlement in – or more precisely not in – the country's
business and banks.
This inventory – it should perhaps be called the bezzle – amounts at any moment to many millions [trillions!] of dollars.
It also varies in size with the business cycle.
In good times people are relaxed, trusting, and money is plentiful. But even though money is plentiful, there are always
many people who need more. Under these circumstances the rate of embezzlement grows, the rate of discovery falls off, and the
bezzle increases rapidly.
In depression all this is reversed. Money is watched with a narrow, suspicious eye. The man who handles it is assumed
to be dishonest until he proves himself otherwise. Audits are penetrating and meticulous. Commercial morality is enormously
improved. The bezzle shrinks."
For nearly a half a century, from 1947 to 1996, real GDP and real Net Worth of Households and Non-profit Organizations (in
2009 dollars) both increased at a compound annual rate of a bit over 3.5%. GDP growth, in fact, was just a smidgen faster -- 0.016%
-- than growth of Net Household Worth.
From 1996 to 2015, GDP grew at a compound annual rate of 2.3% while Net Worth increased at the rate of 3.6%....
The real home price index extends from 1890. From 1890 to 1996, the index increased slightly faster than inflation so that
the index was 100 in 1890 and 113 in 1996. However from 1996 the index advanced to levels far beyond any previously experienced,
reaching a high above 194 in 2006. Previously the index high had been just above 130.
Though the index fell from 2006, the level in 2016 is above 161, a level only reached when the housing bubble had formed in
late 2003-early 2004.
The Shiller 10-year price-earnings ratio is currently 29.34, so the inverse or the earnings rate is 3.41%. The dividend yield
is 1.93. So an expected yearly return over the coming 10 years would be 3.41 + 1.93 or 5.34% provided the price-earnings ratio
stays the same and before investment costs.
Against the 5.34% yearly expected return on stock over the coming 10 years, the current 10-year Treasury bond yield is 2.32%.
The risk premium for stocks is 5.34 - 2.32 or 3.02%:
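For anyone who wants to redo that arithmetic, here is a minimal sketch in Python; the 29.34 CAPE, 1.93% dividend yield and 2.32% Treasury yield are simply the figures quoted above, not fresh data:
# Back-of-the-envelope expected return and equity risk premium from the figures above
cape = 29.34                     # Shiller 10-year price-earnings ratio
earnings_yield = 100 / cape      # inverse of the CAPE, in percent (about 3.41)
dividend_yield = 1.93            # percent
expected_return = earnings_yield + dividend_yield    # about 5.34 percent per year
treasury_10yr = 2.32             # current 10-year Treasury yield, percent
risk_premium = expected_return - treasury_10yr       # about 3.02 percent
print(round(expected_return, 2), round(risk_premium, 2))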
What the robot-productivity paradox is puzzles me, other than that since 2005, for all the focus on the productivity of robots and
on robots replacing labor, there has been a dramatic, broad-based slowing in productivity growth.
However, what the changing relationship between the growth of GDP and net worth since 1996 shows is that asset valuations have
been increasing relative to GDP. Valuations of stocks and homes are at sustained levels that are higher than at any time in the
last 120 years. Bear markets in stocks and home prices have still left asset valuations at historically high levels. I have no
idea why this should be.
The paradox is that productivity statistics can't tell us anything about the effects of robots on employment because both the
numerator and the denominator are distorted by the effects of colossal Ponzi bubbles.
John Kenneth Galbraith used to call it "the bezzle." It is "that increment to wealth that occurs during the magic interval
when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost
it." The current size of the gross national bezzle (GNB) is approximately $24 trillion.
Ponzilocks and the Twenty-Four Trillion Dollar Question
Twenty-three and a half trillion, actually. But what's a few hundred billion? Here today, gone tomorrow, as they say.
At the beginning of 2007, net worth of households and non-profit organizations exceeded its 1947-1996 historical average, relative
to GDP, by some $16 trillion. It took 24 months to wipe out eighty percent, or $13 trillion, of that colossal but ephemeral slush
fund. In mid-2016, net worth stood at a multiple of 4.83 times GDP, compared with the multiple of 4.72 on the eve of the Great
Unworthing.
When I look at the ragged end of the chart I posted yesterday, it screams "Ponzi!" "Ponzi!" "Ponz..."
To make a long story short, let's think of wealth as capital. The value of capital is determined by the present value of an
expected future income stream. The value of capital fluctuates with changing expectations but when the nominal value of capital
diverges persistently and significantly from net revenues, something's got to give. Either economic growth is going to suddenly
gush forth "like nobody has ever seen before" or net worth is going to have to come back down to earth.
Somewhere between 20 and 30 TRILLION dollars of net worth will evaporate within the span of perhaps two years.
When will that happen? Who knows? There is one notable regularity in the data, though -- the one that screams "Ponzi!"
When the net worth bubble stops going up...
...it goes down.
"... But the economy does not feel like one undergoing a technology-driven productivity boom. In the late 1990s, tech optimism was everywhere. At the same time, wages and productivity were rocketing upward. The situation now is completely different. The most recent jobs reports in America and Britain tell the tale. Employment is growing, month after month after month. But wage growth is abysmal. So is productivity growth: not surprising in economies where there are lots of people on the job working for low pay. ..."
"... Increasing labour costs by making the minimum wage a living wage would increase the incentives to boost productivity growth? No, the neoliberals and corporate Democrats would never go for it. They're trying to appeal to the business community and their campaign contributors wouldn't like it. ..."
People are worried about robots taking jobs. Driverless cars are around the corner. Restaurants and shops increasingly carry the
option to order by touchscreen. Google's clever algorithms provide instant translations that are remarkably good.
But the economy does not feel like one undergoing a technology-driven productivity boom. In the late 1990s, tech optimism
was everywhere. At the same time, wages and productivity were rocketing upward. The situation now is completely different. The most
recent jobs reports in America and Britain tell the tale. Employment is growing, month after month after month. But wage growth is
abysmal. So is productivity growth: not surprising in economies where there are lots of people on the job working for low pay.
The obvious conclusion, the one lots of people are drawing, is that the robot threat is totally overblown: the fantasy, perhaps,
of a bubble-mad Silicon Valley - or an effort to distract from workers' real problems, trade and excessive corporate power. Generally
speaking, the problem is not that we've got too much amazing new technology but too little.
This is not a strawman of my own invention. Robert Gordon makes this case. You can see Matt Yglesias make it here. Duncan Weldon,
for his part, writes:
We are debating a problem we don't have, rather than facing a real crisis that is the polar opposite. Productivity growth has
slowed to a crawl over the last 15 or so years, business investment has fallen and wage growth has been weak. If the robot revolution
truly was under way, we would see surging capital expenditure and soaring productivity. Right now, that would be a nice "problem"
to have. Instead we have the reality of weak growth and stagnant pay. The real and pressing concern when it comes to the jobs
market and automation is that the robots aren't taking our jobs fast enough.
And in a recent blog post Paul Krugman concluded:
I'd note, however, that it remains peculiar how we're simultaneously worrying that robots will take all our jobs and bemoaning
the stalling out of productivity growth. What is the story, really?
What is the story, indeed. Let me see if I can tell one. Last fall I published a book: "The Wealth of Humans". In it I set out
how rapid technological progress can coincide with lousy growth in pay and productivity. Start with this:
Low labour costs discourage investments in labour-saving technology, potentially reducing productivity growth.
Increasing labour costs by making the minimum wage a living wage would increase the incentives to boost productivity growth?
No, the neoliberals and corporate Democrats would never go for it. They're trying to appeal to the business community and their
campaign contributors wouldn't like it.
Capital-biased Technological Progress: An Example (Wonkish)
By Paul Krugman
Ever since I posted about robots and the distribution of income, * I've had queries from readers about what capital-biased
technological change – the kind of change that could make society richer but workers poorer – really means. And it occurred to
me that it might be useful to offer a simple conceptual example – the kind of thing easily turned into a numerical example as
well – to clarify the possibility. So here goes.
Imagine that there are only two ways to produce output. One is a labor-intensive method – say, armies of scribes equipped only
with quill pens. The other is a capital-intensive method – say, a handful of technicians maintaining vast server farms. (I'm thinking
in terms of office work, which is the dominant occupation in the modern economy).
We can represent these two techniques in terms of unit inputs – the amount of each factor of production required to produce
one unit of output. In the figure below I've assumed that initially the capital-intensive technique requires 0.2 units of labor
and 0.8 units of capital per unit of output, while the labor-intensive technique requires 0.8 units of labor and 0.2 units of
capital.
[Diagram]
The economy as a whole can make use of both techniques – in fact, it will have to unless it has either a very large amount
of capital per worker or a very small amount. No problem: we can just use a mix of the two techniques to achieve any input combination
along the blue line in the figure. For economists reading this, yes, that's the unit isoquant in this example; obviously if we
had a bunch more techniques it would start to look like the convex curve of textbooks, but I want to stay simple here.
What will the distribution of income be in this case? Assuming perfect competition (yes, I know, but let's deal with that case
for now), the real wage rate w and the cost of capital r – both measured in terms of output – have to be such that the cost of
producing one unit is 1 whichever technique you use. In this example, that means w=r=1. Graphically, by the way, w/r is equal
to minus the slope of the blue line.
Oh, and if you're worried, yes, workers and machines are both paid their marginal product.
But now suppose that technology improves – specifically, that production using the capital-intensive technique gets more efficient,
although the labor-intensive technique doesn't. Scribes with quill pens are the same as they ever were; server farms can do more
than ever before. In the figure, I've assumed that the unit inputs for the capital-intensive technique are cut in half. The red
line shows the economy's new choices.
So what happens? It's obvious from the figure that wages fall relative to the cost of capital; it's less obvious, maybe, but
nonetheless true that real wages must fall in absolute terms as well. In this specific example, technological progress reduces
the real wage by a third, to 0.667, while the cost of capital rises to 2.33.
OK, it's obvious how stylized and oversimplified all this is. But it does, I think, give you some sense of what it would mean
to have capital-biased technological progress, and how this could actually hurt workers.
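As a check on the arithmetic, here is a minimal sketch in Python that solves the two zero-profit conditions (the wage times the labor input plus the cost of capital times the capital input equals one for each technique) using the unit inputs assumed in the example above:
import numpy as np

def factor_prices(tech_a, tech_b):
    # Each technique is [labor per unit of output, capital per unit of output].
    # Zero profit under perfect competition: w*labor + r*capital = 1 for both techniques.
    A = np.array([tech_a, tech_b])
    w, r = np.linalg.solve(A, np.ones(2))
    return w, r

labor_intensive = [0.8, 0.2]       # armies of scribes with quill pens
capital_intensive = [0.2, 0.8]     # server farms, before the improvement
print(factor_prices(labor_intensive, capital_intensive))   # w = 1.0, r = 1.0

improved = [0.1, 0.4]              # capital-intensive unit inputs cut in half
print(factor_prices(labor_intensive, improved))            # w is about 0.667, r about 2.333
This reproduces the numbers in the text: the real wage falls by a third to 0.667 while the cost of capital rises to 2.33.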
Catherine Rampell and Nick Wingfield write about the growing evidence * for "reshoring" of manufacturing to the United States.
* They cite several reasons: rising wages in Asia; lower energy costs here; higher transportation costs. In a followup piece,
** however, Rampell cites another factor: robots.
"The most valuable part of each computer, a motherboard loaded with microprocessors and memory, is already largely made with
robots, according to my colleague Quentin Hardy. People do things like fitting in batteries and snapping on screens.
"As more robots are built, largely by other robots, 'assembly can be done here as well as anywhere else,' said Rob Enderle,
an analyst based in San Jose, California, who has been following the computer electronics industry for a quarter-century. 'That
will replace most of the workers, though you will need a few people to manage the robots.' "
Robots mean that labor costs don't matter much, so you might as well locate in advanced countries with large markets and good
infrastructure (which may soon not include us, but that's another issue). On the other hand, it's not good news for workers!
This is an old concern in economics; it's "capital-biased technological change," which tends to shift the distribution of income
away from workers to the owners of capital.
Twenty years ago, when I was writing about globalization and inequality, capital bias didn't look like a big issue; the major
changes in income distribution had been among workers (when you include hedge fund managers and CEOs among the workers), rather
than between labor and capital. So the academic literature focused almost exclusively on "skill bias", supposedly explaining the
rising college premium.
But the college premium hasn't risen for a while. What has happened, on the other hand, is a notable shift in income away from
labor:
[Graph]
If this is the wave of the future, it makes nonsense of just about all the conventional wisdom on reducing inequality. Better
education won't do much to reduce inequality if the big rewards simply go to those with the most assets. Creating an "opportunity
society," or whatever it is the likes of Paul Ryan etc. are selling this week, won't do much if the most important asset you can
have in life is, well, lots of assets inherited from your parents. And so on.
I think our eyes have been averted from the capital/labor dimension of inequality, for several reasons. It didn't seem crucial
back in the 1990s, and not enough people (me included!) have looked up to notice that things have changed. It has echoes of old-fashioned
Marxism - which shouldn't be a reason to ignore facts, but too often is. And it has really uncomfortable implications.
But I think we'd better start paying attention to those implications.
"... The special command '"sudoedit"' allows users to run sudo with the -e flag or as the command sudoedit . If you include command line arguments in a command in an alias these must exactly match what the user enters on the command line. If you include any of the following they will need to be escaped with a backslash (\): ",", "\", ":", "=". ..."
There are four kinds of aliases: User_Alias, Runas_Alias, Host_Alias and Cmnd_Alias. Each
alias definition is of the form:
Alias_Type NAME = item1, item2, ...
Where Alias_Type is one of User_Alias, Runas_Alias, Host_Alias or Cmnd_Alias. A name is a
string of uppercase letters, numbers and underscores starting with an uppercase letter. You can
put several aliases of the same type on one line by separating them with colons (:) as so:
Alias_Type NAME1 = item1, item2 : NAME2 = item3
You can include other aliases in an alias specification provided they would normally fit
there. For example you can use a user alias wherever you would normally expect to see a list of
users (for example in a user or runas alias).
There is also a built-in alias called ALL which matches everything wherever it is used. If
you use ALL in place of a user list, it matches all users, for example. If you try to set an
alias named ALL it will be overridden by this built-in alias, so don't even try.
User Aliases
User aliases are used to specify groups of users. You can specify usernames, system groups
(prefixed by a %) and netgroups (prefixed by a +) as follows:
# Everybody in the system group "admin" is covered by the alias ADMINS
User_Alias ADMINS = %admin
# The users "tom", "dick", and "harry" are covered by the USERS alias
User_Alias USERS = tom, dick, harry
# The users "tom" and "mary" are in the WEBMASTERS alias
User_Alias WEBMASTERS = tom, mary
# You can also use ! to exclude users from an alias
# This matches anybody in the USERS alias who isn't in WEBMASTERS or ADMINS aliases
User_Alias LIMITED_USERS = USERS, !WEBMASTERS, !ADMINS
Runas Aliases
Runas aliases are almost the same as user aliases, but you are also allowed to specify users by
UID. This is helpful because usernames and groups are matched as strings, so two users with the
same UID but different usernames will not both be matched by entering a single username, but
they can both be matched with the UID. For example:
# UID 0 is normally used for root
# Note the hash (#) on the following line indicates a uid, not a comment.
Runas_Alias ROOT = #0
# This is for all the admin users similar to the User_Alias of ADMINS set earlier
# with the addition of "root"
Runas_Alias ADMINS = %admin, root
Host Aliases
A host alias is a list of hostnames, IP addresses, networks and netgroups (prefixed with a
+). If you do not specify a netmask with a network, the netmask of the host's Ethernet
interface(s) will be used when matching.
# This is all the servers
Host_Alias SERVERS = 192.168.0.1, 192.168.0.2, server1
# This is the whole network
Host_Alias NETWORK = 192.168.0.0/255.255.255.0
# And this is every machine in the network that is not a server
Host_Alias WORKSTATIONS = NETWORK, !SERVERS
# This could have been done in one step with
# Host_Alias WORKSTATIONS = 192.168.0.0/255.255.255.0, !SERVERS
# but I think this method is clearer.
Command Aliases
Command aliases are lists of commands and directories. You can use this to specify a group
of commands. If you specify a directory it will include any file within that directory but not
in any subdirectories.
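The user specifications further down this page refer to command aliases such as WEB_CMDS,
ADMIN_CMDS, SHUTDOWN_CMDS and PRINTING_CMDS that are not defined elsewhere on this page. A
sketch of what they might look like follows; the exact command lists are only illustrative:
# Commands for managing the web server
Cmnd_Alias WEB_CMDS = /usr/sbin/apache2ctl, /etc/init.d/apache2
# General administrative commands
Cmnd_Alias ADMIN_CMDS = /usr/sbin/useradd, /usr/sbin/userdel, /usr/bin/passwd
# Commands for shutting down or rebooting the machine
Cmnd_Alias SHUTDOWN_CMDS = /sbin/shutdown, /sbin/reboot, /sbin/halt
# Commands for managing print queues
Cmnd_Alias PRINTING_CMDS = /usr/sbin/lpc, /usr/bin/lprm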
The special command "sudoedit" allows users to run sudo with the -e flag or as
the command sudoedit. If you include command line arguments in a command in an alias,
these must exactly match what the user enters on the command line. If you include any of the
following characters they will need to be escaped with a backslash (\): ",", "\", ":", "=".
User specifications are where the sudoers file sets out who can run what as whom. This is the
key part of the file, and all the aliases have been set up for exactly this point. If this were
a film, this is where all the key threads of the story come together in the glorious
unveiling before the final climactic ending. Basically, it is important, and without this you
ain't going anywhere.
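Roughly, a user specification takes the following simplified form (a sketch; the runas list
and the tag are optional):
User_List Host_List = (Runas_List) Tag_Spec: Cmnd_List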
The user list is a list of users or a user alias that has already been set, the host list is
a list of hosts or a host alias, the operator list is a list of users they must be running as
or a runas alias and the command list is a list of commands or a cmnd alias.
The tag list has not been covered yet; it allows you to set special things for each command.
You can use PASSWD and NOPASSWD to specify whether the user has to enter a password or not, and
you can also use NOEXEC to prevent programs from launching shells themselves (as once a program
is running with sudo it has full root privileges, so it could launch a root shell to circumvent
any restrictions in the sudoers file).
For example (using the aliases and users from earlier)
# This lets the webmasters run all the web commands on the machine
# "webserver" provided they give a password
WEBMASTERS webserver= WEB_CMDS
# This lets the admins run all the admin commands on the servers
ADMINS SERVERS= ADMIN_CMDS
# This lets all the USERS run admin commands on the workstations provided
# they give the root password or an admin password (using "sudo -u <username>")
USERS WORKSTATIONS=(ADMINS) ADMIN_CMDS
# This lets "harry" shutdown his own machine without a password
harry harrys-machine= NOPASSWD: SHUTDOWN_CMDS
# And this lets everybody print without requiring a password
ALL ALL=(ALL) NOPASSWD: PRINTING_CMDS
The Default Ubuntu Sudoers File
The sudoers file that ships with Ubuntu 8.04 by default is included here so if you break
everything you can restore it if needed and also to highlight some key things.
# /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
# Uncomment to allow members of group sudo to not need a password
# %sudo ALL=NOPASSWD: ALL
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL) ALL
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
This is pretty much empty and only has three rules in it. The first ( Defaults
env_reset ) resets the terminal environment after switching to root, i.e. all user-set
variables are removed. The second ( root ALL=(ALL) ALL ) just lets root do everything
on any machine as any user. And the third ( %admin ALL=(ALL) ALL ) lets anybody in the
admin group run anything as any user. Note that they will still require a password (thus giving
you the normal behaviour you are used to).
If you want to add your own specifications and you are a member of the admin group then you
will need to add them after this line. Otherwise all your changes will be overridden by this
line saying you (as part of the admin group) can do anything on any machine as any user
provided you give a password.
Common Tasks
This section includes some common tasks and how to accomplish them using the sudoers
file.
Shutting Down From The Console Without A Password
Often people want to be able to shut their computers down without requiring a password to do
so. This is particularly useful in media PCs where you want to be able to use the shutdown
command in the media centre to shutdown the whole computer.
To do this you need to add some cmnd aliases as follows:
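A sketch of such an alias (the exact shutdown commands and paths may differ on your system):
Cmnd_Alias SHUTDOWN_CMDS = /sbin/shutdown, /sbin/poweroff, /sbin/reboot, /sbin/halt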
You also need to add a user specification (at the end of the file after the " %admin ALL
= (ALL) ALL " line so it takes effect - see above for details):
<your username> ALL=(ALL) NOPASSWD: SHUTDOWN_CMDS
Obviously you need to replace "<your username>" with the username of the user who
needs to be able to shut down the PC without a password. You can use a user alias here as
normal.
Multiple tags on a line
There are times where you need to have both NOPASSWD and NOEXEC or other tags on the same
configuration line. The man page for sudoers is less than clear, so here is an example of how
this is done:
myuser ALL = (root) NOPASSWD:NOEXEC: /usr/bin/vim
This example lets the user "myuser" run as root the "vim" binary without a password, and
without letting vim shell out (the :shell command).
Enabling Visual Feedback when Typing Passwords
As of Ubuntu 10.04 (Lucid), you can enable visual feedback when you are typing a password at
a sudo prompt.
Simply edit /etc/sudoers and change the Defaults line to read:
Defaults env_reset,pwfeedback
Troubleshooting
If your changes don't seem to have had any effect, check that they are not trying to use
aliases that are not defined yet and that no other user specifications later in the file are
overriding what you are trying to accomplish.
"... A command may also be the full path to a directory (including a trailing /). This permits execution of all the files in that directory, but not in any subdirectories. ..."
"... The keyword sudoedit is also recognised as a command name, and arguments can be specified as with other commands. Use this instead of allowing a particular editor to be run with sudo, because it runs the editor as the user and only installs the editor's output file into place as root (or other target user). ..."
The /etc/sudoers file contains "user specifications" that define the commands that users may
execute. When sudo is invoked, these specifications are checked in order, and the last match is
used. A user specification looks like this at its most basic:
User Host = (Runas) Command
Read this as "User may run Command as the Runas user on Host".
Any or all of the above may be the special keyword ALL, which always matches.
User and Runas may be usernames, group names prefixed with %, numeric UIDs prefixed with #,
or numeric GIDs prefixed with %#. Host may be a hostname, IP address, or a whole network (e.g.,
192.0.2.0/24), but not 127.0.0.1.
Runas
This optional clause controls the target user (and group) sudo will run the Command as, or
in other words, which combinations of the -u and -g arguments it will accept.
If the clause is omitted, the user will be permitted to run commands only as root. If you
specify a username, e.g., (postgres), sudo will accept "-u postgres" and run commands as that
user. In both cases, sudo will not accept -g.
If you also specify a target group, e.g., (postgres:postgres), sudo will accept any
combination of the listed users and groups (see the section on aliases below). If you specify
only a target group, e.g., (:postgres), sudo will accept and act on "-g postgres" but run
commands only as the invoking user.
This is why you so often see (ALL:ALL) in examples.
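A sketch of the variants described above, with illustrative usernames:
alice ALL = (postgres) /usr/bin/psql           # -u postgres accepted, no -g
bob   ALL = (postgres:postgres) /usr/bin/psql  # combinations of the listed users and groups
carol ALL = (:postgres) /usr/bin/psql          # -g postgres only, commands run as carol
dave  ALL = (ALL:ALL) ALL                      # any -u and -g combination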
Commands
In the simplest case, a command is the full path to an executable, which permits it to be
executed with any arguments. You may specify a list of arguments after the path to permit the
command only with those exact arguments, or write "" to permit execution only without any
arguments.
A command may also be the full path to a directory (including a trailing /). This permits
execution of all the files in that directory, but not in any subdirectories.
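For instance, under made-up usernames and paths, those three forms might look like this:
alice ALL = (root) /usr/bin/systemctl restart apache2   # only with these exact arguments
bob   ALL = (root) /usr/bin/free ""                     # only with no arguments at all
carol ALL = (root) /usr/local/admin-scripts/            # any file in this directory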
The keyword sudoedit is also recognised as a command name, and arguments can be specified as
with other commands. Use this instead of allowing a particular editor to be run with sudo,
because it runs the editor as the user and only installs the editor's output file into place as
root (or other target user).
As shown above, comma-separated lists of commands and aliases may be specified. Commands may
also use shell wildcards either in the path or in the argument list (but see the warning below
about the latter).
Sudo is very flexible, and it's tempting to set up very fine-grained access, but it can be
difficult to understand the consequences of a complex setup, and you can end up with unexpected
problems . Try to keep things simple.
Options
Before the command, you can specify zero or more options to control how it will be executed.
The most important options are NOPASSWD (to not require a password) and SETENV (to allow the
user to set environment variables for the command).
ams ALL=(ALL) NOPASSWD: SETENV: /bin/ls
Other available options include NOEXEC, LOG_INPUT and LOG_OUTPUT, and SELinux role and type
specifications. These are all documented in the manpage.
Digests
The path to a binary (i.e., not a directory or alias) may also be prefixed with a
digest:
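For instance (the digest below is only a placeholder, not a real checksum; generate a real one
with sha256sum or openssl dgst -sha256 against the binary):
bob ALL = (root) sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef /usr/bin/passwd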
The specified binary will then be executed only if it matches the digest. SHA-2 digests of
224, 256, 384, and 512-bits are accepted in hex or Base64 format. The values can be generated
using, e.g., sha512sum or openssl.
Aliases
In addition to the things listed above, a User, Host, Runas, or Command may be an alias,
which is a named list of comma-separated values of the corresponding type. An alias may be used
wherever a User, Host, Runas, or Command may occur. They are always named in uppercase, and can
be defined as shown in these examples:
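For example (all of the names here are illustrative):
User_Alias  ADMINS = alice, bob
User_Alias  LEGACYUSERS = ADMINS, carol
Host_Alias  WEBHOSTS = www1, www2
Runas_Alias DB = postgres, mysql
Cmnd_Alias  PKGCMDS = /usr/bin/apt-get, /usr/bin/dpkg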
An alias definition can also include another alias of the same type (e.g., LEGACYUSERS
above). You cannot include options like NOPASSWD: in command aliases.
Any term in a list may be prefixed with ! to negate it. This can be used to include a group
but exclude a certain user, or to exclude certain addresses in a network, and so on. Negation
can also be used in command lists, but note the manpage's warning that trying to "subtract"
commands from ALL using ! is generally not effective .
Use aliases whenever you need rules involving multiple users, hosts, or
commands.
Default options
Sudo has a number of options whose values may be set in the configuration file, overriding
the defaults either unconditionally, or only for a given user, host, or command. The defaults
are sensible, so you do not need to care about options unless you're doing something
special.
Option values are specified in one or more "Defaults" lines. The example below switches on
env_reset, turns off insults (read !insults as "not insults"), sets passwd_tries to 4, and so
on. All the values are set unconditionally, i.e. they apply to every user specification.
Defaults env_reset, !insults, passwd_tries=4, \
    lecture=always
Defaults passprompt="Password for %p:"
Options may also be set only for specific hosts, users, or commands, as shown below.
Defaults@host sets options for a host, Defaults:user for a (requesting) user, Defaults!command
for a command, and Defaults>user for a target user. You can also use aliases in these
definitions.
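A sketch of each form, with made-up host, user, and command names:
Defaults@webhost          passwd_tries=2
Defaults:alice            lecture=never
Defaults!/usr/bin/apt-get env_keep += "http_proxy"
Defaults>postgres         !requiretty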
Unconditional defaults are parsed first, followed by host and user defaults, then runas
defaults, then command defaults.
The many available options are explained well in the
manpage.
Complications
In addition to the alias mechanism, a User, Host, Runas, or Command may each be a
comma-separated list of things of the corresponding type. Also, a user specification may
contain multiple host and command sets for a single User. Please be sparing in your use of this
syntax, in case you ever have to make sense of it again.
Users and hosts can also be a +netgroup or other more esoteric things, depending on plugins.
Host names may also use shell wildcards (see the fqdn option).
If Runas is omitted but the () are not, sudo will reject -u and -g and run commands only as
the invoking user.
You can use wildcards in command paths and in arguments, but their meaning is different. In
a path, a * will not match a /, so /usr/bin/* will match /usr/bin/who but not
/usr/bin/X11/xterm. In arguments, a * does match /; also, arguments are matched as a
single string (not a list of separate words), so * can match across words. The manpage includes
the following problematic example, which permits additional arguments to be passed to /bin/cat
without restriction:
%operator ALL = /bin/cat /var/log/messages*
Warning : Sudo will not work if /etc/sudoers contains syntax errors, so you should
only ever edit it using visudo, which performs basic sanity checks, and installs the new file
only if it parses correctly.
Another warning: if you take the EBNF in the manpage seriously enough, you will discover
that the implementation doesn't follow it. You can avoid this sad fate by linking to this
article instead of trying to write your own. Happy sudoing!
20 Sed (Stream Editor) Command Examples for Linux Users
by Pradeep Kumar · Published November 9, 2017 · Updated
November 9, 2017
The sed command, or Stream Editor, is a very powerful utility offered by Linux/Unix
systems. It is mainly used for text substitution and find & replace, but it can also perform other text manipulations like insertion,
deletion and search. With sed, we can edit complete files without actually having to open them. Sed also supports the use of regular
expressions, which makes sed an even more powerful text manipulation tool.
In this article, we will learn to use the sed command with the help of some examples. The basic syntax for using the sed command is:
sed [OPTIONS] 'SCRIPT' [INPUTFILE...]
Now let's see some examples.
Example :1) Displaying partial text of a file
With sed, we can view only some part of a file rather than seeing whole file. To see some lines of the file, use the following
command,
[linuxtechi@localhost ~]$ sed -n '22,29p' testfile.txt
here, option '-n' suppresses printing of the whole file & option 'p' will print only lines 22 to 29.
Example :2) Display all except some lines
To display all content of a file except for some portion, use the following command,
[linuxtechi@localhost ~]$ sed 22,29d testfile.txt
Option 'd' will remove the mentioned lines from output.
Example :3) Display every 3rd line starting with Nth line
To display the content of every 3rd line starting with line number 2 (or any other line), use the following command
[linuxtechi@localhost ~]$ sed -n '2~3p' file.txt
Example :4 ) Deleting a line using sed command
To delete a line with sed from a file, use the following command,
[linuxtechi@localhost ~]$ sed 'Nd' testfile.txt
where 'N' is the line number & option 'd' will delete the mentioned line number. To delete the last line of the file, use
[linuxtechi@localhost ~]$ sed '$d' testfile.txt
Example :5) Deleting a range of lines
To delete a range of lines from the file, run
[linuxtechi@localhost ~]$ sed '29,34d' testfile.txt
This will delete lines 29 to 34 from testfile.txt file.
Example :6) Deleting lines other than the mentioned
To delete lines other than the mentioned lines from a file, we will use '!'
[linuxtechi@localhost ~]$ sed '29,34!d' testfile.txt
here '!' is used as a NOT operator, so it reverses the condition, i.e. it will not delete the lines mentioned. All lines other than
29 to 34 will be deleted from the file testfile.txt.
Example :7) Adding Blank lines/spaces
To add a blank line after every line, we will use option 'G',
[linuxtechi@localhost ~]$ sed G testfile.txt
Example :8) Search and Replacing a string using sed
To search & replace a string from the file, we will use the following example,
[linuxtechi@localhost ~]$ sed 's/danger/safety/' testfile.txt
here option 's' will search for the word 'danger' & replace it with 'safety' on every line, for the first occurrence only.
Example :9) Search and replace a string from whole file using sed
To replace every occurrence of the word throughout the file, we will use option 'g' with 's',
[linuxtechi@localhost ~]$ sed 's/danger/safety/g' testfile.txt
Example :10) Replace the nth occurrence of string pattern
We can also substitute a string on nth occurrence from a file. Like replace 'danger' with 'safety' only on second occurrence,
[linuxtechi@localhost ~]$ sed 's/danger/safety/2' testfile.txt
To replace 'danger' from the 2nd occurrence onwards on every line of the file, use
[linuxtechi@localhost ~]$ sed 's/danger/safety/2g' testfile.txt
Example :11) Replace a string on a particular line
To replace a string only from a particular line, use
[linuxtechi@localhost ~]$ sed '4 s/danger/safety/' testfile.txt
This will only substitute the string on the 4th line of the file. We can also mention a range of lines instead of a single line,
[linuxtechi@localhost ~]$ sed '4,9 s/danger/safety/' testfile.txt
Example :12) Add a line after/before the matched search
To add a new line with some content after every pattern match, use option 'a' ,
[linuxtechi@localhost ~]$ sed '/danger/a "This is new line with text after match"' testfile.txt
To add a new line with some content before every pattern match, use option 'i',
[linuxtechi@localhost ~]$ sed '/danger/i "This is new line with text before match" ' testfile.txt
Example :13) Change a whole line with matched pattern
To change a whole line to a new line when a search pattern matches we need to use option 'c' with sed,
[linuxtechi@localhost ~]$ sed '/danger/c "This will be the new line" ' testfile.txt
So when the pattern matches 'danger', whole line will be changed to the mentioned line.
Advanced options with sed
Up until now we were only using simple expressions with sed, now we will discuss some advanced uses of sed with regex,
Example :14) Running multiple sed commands
If we need to perform multiple sed expressions, we can use option 'e' to chain the sed commands,
[linuxtechi@localhost ~]$ sed -e 's/danger/safety/g' -e 's/hate/love/' testfile.txt
Example :15) Making a backup copy before editing a file
To create a backup copy of a file before we edit it, use option '-i.bak',
[linuxtechi@localhost ~]$ sed -i.bak -e 's/danger/safety/g' testfile.txt
This will create a backup copy of the file with the extension .bak. You can also use any other extension if you like.
Example :16) Deleting text starting with & ending with a pattern
To remove the part of a line that starts with a particular string & ends with another string, use
[linuxtechi@localhost ~]$ sed -e 's/danger.*stops//g' testfile.txt
This will remove the text starting with 'danger' & ending with 'stops' from each matching line; there can be any number of words
in between, and '.*' defines that part.
Example :17) Prepending text to every line
To add some content before every line with sed & regex, use
[linuxtechi@localhost ~]$ sed -e 's/.*/testing sed &/' testfile.txt
So now every line will have 'testing sed' before it.
Example :18) Removing all commented lines & empty lines
To remove all commented lines i.e. lines with # & all the empty lines, use
[linuxtechi@localhost ~]$ sed -e 's/#.*//;/^$/d' testfile.txt
To only remove commented lines, use
[linuxtechi@localhost ~]$ sed -e 's/#.*//' testfile.txt
Example :19) Get list of all usernames from /etc/passwd
To get the list of all usernames from /etc/passwd file, use
[linuxtechi@localhost ~]$ sed 's/\([^:]*\).*/\1/' /etc/passwd
a complete list of all usernames will be generated on screen as output.
Example :20) Prevent overwriting of system links with sed command
The 'sed -i' command has been known to remove system links & create only regular files in place of the link file. To avoid such
a situation & prevent 'sed -i' from destroying the links, use the '--follow-symlinks' option with the command being executed.
Let's assume I want to disable SELinux on CentOS or RHEL servers
[linuxtechi@localhost ~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
These were some examples showing sed in action; we can use this as a reference to employ them as & when needed. If you guys have
any queries related to this or any other article, do share them with us.
Nagios is a monitoring tool under GPL licence. This tool lets you monitor servers, network
hardware (switches, routers, ...) and applications. A lot of plugins are available and its big
community makes Nagios the biggest open source monitoring tool. This tutorial shows how to
install Nagios 3.4.4 on CentOS 6.3.
Prerequisites
After installing your CentOS server, you have to disable selinux & install some packages
to make nagios work.
To disable selinux, open the file: /etc/selinux/config
# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive // change this value to disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
# cd ..
# tar xvzf nagios-plugins-1.4.15.tar.gz
# cd nagios-plugins-1.4.15
# ./configure
# make
# make install
Start the apache service and enable it on boot:
# service httpd start
# chkconfig httpd on
Now, connect to your nagios system:
http://Your-Nagios-IP/nagios and enter the login nagiosadmin & the password you have chosen
above.
And after the installation ?
After the installation you have to configure all your hosts & services in the Nagios
configuration files. This step is performed on the command line and is complicated, so I
recommend installing a tool like Centreon, which is a beautiful front-end for adding your
hosts & services.
1. Before installing Nagios Core from sources in Ubuntu or Debian , first install the
following LAMP stack components in your system, without MySQL RDBMS database component, by
issuing the below command.
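A command along the following lines should cover this step (a sketch; package names vary slightly between Debian and Ubuntu releases):
# apt install apache2 libapache2-mod-php php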
2. On the next step, install the following system dependencies and utilities required to
compile and install Nagios Core from sources, by issuing the following command.
# apt install wget unzip zip autoconf gcc libc6 make apache2-utils libgd-dev
Step 2: Install Nagios 4 Core in Ubuntu and Debian
3. On the first step, create the nagios system user and group and add the Apache www-data
user to the nagios group, by issuing the below commands.
# useradd nagios
# usermod -a -G nagios www-data
4. After all the dependencies, packages and system requirements for compiling Nagios from
sources are present in your system, go to the Nagios webpage and grab the latest version of the Nagios Core stable source
archive by issuing the following command.
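Something like the following should do it (the URL is indicative; check the Nagios download page for the current release):
# wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.3.4.tar.gz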
5. Next, extract the Nagios tarball and enter the extracted nagios directory with the following
commands. Issue the ls command to list the nagios directory content.
# tar xzf nagios-4.3.4.tar.gz
# cd nagios-4.3.4/
# ls
List Nagios Content
6. Now, start to compile Nagios from sources. Make sure you configure Nagios with the Apache
sites-enabled directory configuration by issuing the below command.
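The configure invocation would look something like this (the sites-enabled path is the usual Debian/Ubuntu location):
# ./configure --with-httpd-conf=/etc/apache2/sites-enabled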
7. In the next step, build Nagios files by issuing the following command.
# make all
8. Now, install Nagios binary files, CGI scripts and HTML files by issuing the following
command.
# make install
9. Next, install Nagios daemon init and external command mode configuration files and make
sure you enable nagios daemon system-wide by issuing the following commands.
# make install-init
# make install-commandmode
# systemctl enable nagios.service
10. Next, run the following command in order to install some Nagios sample configuration
files needed by Nagios to run properly.
# make install-config
11. Also, install the Nagios configuration file for the Apache web server, which can be found
in the /etc/apache2/sites-enabled/ directory, by executing the below command.
# make install-webconf
12. Next, create the nagiosadmin account and a password for this account, needed by the Apache
server to log in to the Nagios web panel, by issuing the following command.
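Assuming the default /usr/local/nagios install prefix, the command would be along these lines:
# htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin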
13. To allow the Apache HTTP server to execute Nagios CGI scripts and to access the Nagios
admin panel via HTTP, first enable the cgi module in Apache, then restart the Apache service,
and start and enable the Nagios daemon system-wide by issuing the following commands.
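Something along these lines should work:
# a2enmod cgi
# systemctl restart apache2
# systemctl start nagios
# systemctl enable nagios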
14. Finally, log in to the Nagios Web Interface by pointing a browser to your server's IP
address or domain name at the following URL address via the HTTP protocol. Log in to Nagios
with the nagiosadmin user and the password set up with the htpasswd script.
This is an example of the contents of the sudoers file located in the
/etc directory of the UNIX target computer. This example contains the sample
configurations required to use the sudo functionality as mentioned in the section Using sudo
functionality for querying Oracle UNIX targets .
Over time, your sudoers file will grow with more and more entries, which is to be expected. This could be because more application
environments are being placed on the server, or because the delegation of current tasks is being split down further to segregate
responsibility. With many entries, typos can occur, which is common. Making the sudoers file more manageable by the root user makes
good administrative sense. Let's look at two ways this can be achieved, or at least a good standard to build on. If you have many
static entries (meaning the same command is run on every machine where sudo is installed), put these into a separate sudoers file,
which can be achieved using the include directive.
Having many entries for individual users can also be time consuming when adding or amending entries. With many user entries, it
is good practice to put these into groups. Using groups, you can literally group users together, and the groups are valid AIX groups.
Now look at these two methods more closely.
Include file
Within large-enterprise environments, keeping the sudoers file maintained is an important and regularly required task. A solution
to make this chore easier is to reorganize the sudoers file. One way to do this is to extract entries that are static or reusable,
where the same commands are run on every box. Like audit/security or storix backups or general performance reports, with sudo you
can now use the include directive. The main sudoers file can then contain the local entries, and the include file would barely need
editing as those entries are static. When visudo is invoked, it will scan sudoers when it sees the include entry. It will scan that
file, then come back to the main sudoers and carry on scanning. In reality, it works like this. When you exit out of visudo from
the main sudoers file, it will take you to the include file for editing. Once you quit the include, you are back to the AIX prompt.
You can have more than one include file, but I cannot think of a reason why you would want more than one.
Let's call our secondary sudoers file sudo_static.<hostname>. In the examples in this demonstration the hostname I am using is
rs6000. In the main sudoers file, make the entry as follows:
#include /etc/sudo_static.rs6000
Next, add some entries to the /etc/sudo_static.rs6000 file. You do not have to put in all the sudoers directives or stanzas. If
this file contains entries where they are not required, don't include them. For example, my include file contains only the following
text, and nothing more.
You can use %h instead of typing the actual hostname:
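That is, the include line shown above could be written as:
#include /etc/sudo_static.%h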
I personally do not use this method because I have experienced it returning extra characters in the hostname. This issue is fixed
in sudo 1.7.2p1.
When you run visudo, and you save and quit the file, visudo will inform you to press Enter to edit the include sudoers file. Once
you have edited that file, visudo will pick up on syntax errors, if any, as with the main file. Alternatively, to edit the include
file directly, use:
visudo -f /etc/sudo_static.rs6000
Using groups
Users belonging to a valid AIX group can be included in sudoers, making the sudoers file more manageable with fewer entries per
user. When reorganizing the sudoers entries to include groups, you may have to create new groups under AIX to include users that
are only allowed to use sudo for certain commands. To use groups, simply prefix the entries with a '%'. Assume you have groups called
devops and devuat , and those groups have the following users:
# lsgroup -f -a users devops
devops:
        users=joex,delta,charlie,tstgn

# lsgroup -f -a users devuat
devuat:
        users=zebra,spsys,charlie
We want the group devops to be allowed to run the /usr/local/bin/data_ext.sh command as dbdftst, and the group devuat to be allowed
to run the commands /usr/local/bin/data_mvup.sh and /usr/local/bin/data_rep.sh as dbukuat.
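The corresponding sudoers entries could look like this (hostnames simplified to ALL):
%devops ALL = (dbdftst) NOPASSWD: /usr/local/bin/data_ext.sh
%devuat ALL = (dbukuat) /usr/local/bin/data_mvup.sh, /usr/local/bin/data_rep.sh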
Notice in the previous entries, the group devops users will not be prompted for their password when executing /usr/local/bin/data_ext.sh;
however, the group devuat users will be prompted for their password. User "charlie" is a member of both groups ( devops
and devuat ), so he can execute all the above commands.
Timeout with sudo
Sudo has a feature that uses time tickets to determine how long since the last sudo command was run. During this time period,
the user can re-run the command without being prompted for the password (that's the user's own password). Once this time allotment
has ended, the user is prompted for the password again to re-run the command. If the user gives the correct password, the command
is executed, the ticket is then re-set, and the time clock starts all over again. The ticket feature will not work if you have NOPASSWD
in the user's entry in sudoers. The default timeout is five minutes. If you wish to change the default value, simply put an entry
in sudoers. For example, to set the timeout value for user "bravo" on any commands he runs to 20 minutes, you could use:
Defaults:bravo timestamp_timeout=20
To destroy the ticket, as the user, use:
$ sudo -k
When the ticket is destroyed, the user will be prompted for his password again, when running a sudo command.
Please do not set the timeout value for all users, as this will cause problems, especially when running jobs in batch and the
batch takes longer to run than normal. To disable this feature, use the value -1 in the timestamp_timeout variable. The
time tickets are directory entries with the name of the user located in /var/run/sudo.
Those variables
As discussed earlier, sudo will strip out potentially dangerous system variables. To check out what variables are kept and which
ones are stripped, use sudo -V . The output will give you a listing of preserved and stripped variables. Stripping out the LIBPATH
is clearly an inconvenience. There are a couple of ways around this: either write a wrapper script or specify the environment variables
on the command line. Looking at the wrapper script solution first, suppose you have an application that stops or starts a DB2® instance.
You could create a bare-bones script that would keep the variables intact. In
Listing 1 ( rc.db2 ), notice that you
source the instance profile, which in turn exports the various LIBPATH and DB2 environment variables, keeping the environment
intact, by using:
. /home/$inst/sqllib/db2profile
For completeness, the entries in sudoers to execute this and not strip out any system environment variables are:
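A sketch of what those entries might look like (the script path, alias and user names are illustrative):
Cmnd_Alias DB2CMDS = /usr/local/bin/rc.db2
Defaults!DB2CMDS !env_reset
bravo ALL = (root) NOPASSWD: DB2CMDS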
If you do not put the !env_reset entry in, you will get the following error from sudo when you try to run the command:
sudo: sorry, you are not allowed to set the following environment variables: LIBPATH
If you find that sudo is also stripping out other environment variables, you can specify the variable names in sudoers so that
sudo keeps those variables intact (with the Defaults env_keep += directive). For instance, suppose sudo was stripping out the application
variables DSTAGE_SUP and DSTAGE_META from one of my sudo-ised scripts. To preserve these variables, I could put the following entries
in sudoers:
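For example:
Defaults env_keep += "DSTAGE_SUP DSTAGE_META"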
Now when the sudo script is executed, the above environment variables are preserved.
Securing the sudo path
A default PATH within sudoers can be imposed using the secure_path directive. This directive specifies where to look for binaries
and commands when a user executes a sudo command. This option clearly tries to lock down specific areas where a user runs a sudo
command, which is good practice. Use the following directive in sudoers, specifying the secure PATH with its search directories:
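For instance (the directory list shown is only an example; adjust it to your environment):
Defaults secure_path = "/usr/bin:/usr/sbin:/usr/local/bin:/etc"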
Restrictions can be put in place to restrict certain commands to users. Assume you have a group called dataex , whose
members are "alpha," "bravo," and "charlie." Now, that group has been allowed to run the sudo command /usr/local/bin/mis_ext * ,
where the asterisk represents the many parameters passed to the script. However, user "charlie" is not allowed to execute that script
if the parameter is import . This type of condition can be met by using the logical NOT '!' operator. Here is how that is achieved
in sudoers:
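A sketch of how that could look (note that the more specific, restrictive entry comes last):
%dataex ALL = /usr/local/bin/mis_ext *
charlie ALL = /usr/local/bin/mis_ext *, !/usr/local/bin/mis_ext import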
Note that the logical NOT operator entries go after the non-restrictive entry. Many conditional NOT entries can be applied on
the same line; just make sure that they are comma separated, like so:
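For instance (the second restricted parameter here is made up purely for illustration):
charlie ALL = /usr/local/bin/mis_ext *, !/usr/local/bin/mis_ext import, !/usr/local/bin/mis_ext purge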
When in visudo, do not think that just saving the sudo entry and staying in visudo will make the changes effective; it won't. You
must exit visudo for the changes to take effect.
Rolling out sudo commands
Rolling out sudo commands to remote hosts in an enterprise environment is best done using an ssh script as root, and the keys should
have been exchanged between the hosts for password-less logins. Let's look at one example of how to do this. With geographically
remote machines, if you get a hardware issue of some sort (disk or memory), the IBM® engineer will be on-site to replace the failing
hardware. There will be occasions when they require the root password to carry out their task. One procedure you might want to put
in place is that, for the engineer to gain access to root, they must use sudo. Informing the engineer of the password prior to the
visit would be advantageous. Listing 2 demonstrates
one way you could roll out this configuration. Looking more closely at
Listing 2 , use a for loop containing
a list of hosts you are pushing out to. (Generally, though, you would have these hosts in a text file and read them in using a while
loop.) Using the 'here' document method, a backup copy of sudoers is made, and an entry is then appended to sudoers, like so:
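The appended entry might look something like this (the path to su may differ on your system):
ibmeng ALL = (root) NOPASSWD: /usr/bin/su -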
Next, the user "ibmeng" is created, and the password is set for the user using chpasswd . In this demonstration, it is ibmpw
. A message is then appended to their profile, informing the user how to sudo to root. So when the engineer logs in, he is presented
with the message:
IBM Engineer, to access root account type: sudo -u root su -
Of course the account for ibmeng would be locked after the visit.
Sudo allows you to control who can run what commands as whom. But you must understand the features of sudoers fully
to appreciate its implications and your responsibilities.
In order to use a netgroup in the sudoers file, you just need to explicitly mark it as a netgroup
by using a " + " sign (instead of the " % " sign that would be used for a system group).
You will need to include this netgroup inside a User_Alias (you may want to create a new User_Alias
for this purpose).
Please check the " 3.1.2 User_Alias " section for more info, and feel free to ask for a more detailed explanation.
A certain piece of very misleading advice is often given online to users having problems
with the way certain command-line applications are displaying in their terminals. This is to suggest
that the user change the value of their TERM environment variable from within the shell,
doing something like this:
$ TERM=xterm-256color
This misinformation sometimes extends to suggesting that users put the forced TERM
change into their shell startup scripts. The reason this is such a bad idea is that it forces your
shell to assume what your terminal is, and thereby disregards the initial terminal identity string
sent by the emulator. This leads to a lot of confusion when one day you need to connect with a very
different terminal emulator.
Accounting for differences
All terminal emulators are not created equal. Certainly, not all of them are
xterm(1) , although many
other terminal emulators do a decent but not comprehensive job of copying it. The value of the
TERM environment variable is used by the system running the shell to determine what
the terminal connecting to it can and cannot do, what control codes to send to the program to use
those features, and how the shell should understand the input of certain key codes, such as the Home
and End keys. These things in particular are common causes of frustration for new users who turn
out to be using a forced TERM string.
Instead, focus on these two guidelines for setting TERM :
Avoid setting TERM from within the shell, especially in your startup
scripts like .bashrc or .bash_profile . If that ever seems like the
answer, then you are probably asking the wrong question! The terminal identification string should
always be sent by the terminal emulator you are using; if you do need to change it, then
change it in the settings for the emulator.
Always use an appropriate TERM string that accurately describes what your choice
of terminal emulator can and cannot display. Don't make an
rxvt(1) terminal identify
itself as xterm ; don't make a linux console identify itself as
vt100 ; and don't make an xterm(1) compiled without 256 color support
refer to itself as xterm-256color .
In particular, note that sometimes for compatibility reasons, the default terminal identification
used by an emulator is given as something generic like xterm , when in fact a more accurate
or comprehensive terminal identity file is more than likely available for your particular choice
of terminal emulator with a little searching.
An example that surprises a lot of people is the availability of the putty terminal
identity file, when the application defaults to presenting itself as an imperfect xterm(1)
emulator.
Configuring your emulator's string
Before you change your terminal string in its settings, check whether the default it uses is already
the correct one, with one of these:
$ echo $TERM
$ tset -q
Most builds of rxvt(1) , for example, should already use the correct TERM
string by default, such as rxvt-unicode-256color for builds with 256 colors and Unicode
support.
Where to configure which TERM string your terminal uses will vary depending on the
application. For xterm(1) , your .Xresources file should contain a definition
like the below:
XTerm*termName: xterm-256color
For rxvt(1) , the syntax is similar:
URxvt*termName: rxvt-unicode-256color
Other GTK and Qt emulators sometimes include the setting somewhere in their preferences. Look
for mentions of xterm , a common fallback default.
For Windows PuTTY, it's configurable under the "Connection > Data" section:
More detail about configuring PuTTY for connecting to modern systems can be found in my
article on configuring
PuTTY .
Testing your TERM string
On GNU/Linux systems, an easy way to test the terminal capabilities (particularly effects like
colors and reverse video) is using the
msgcat(1) utility:
$ msgcat --color=test
This will output a large number of tests of various features to the terminal, so that you can
check their appearance is what you expect.
Finding appropriate terminfo(5) definitions
On GNU/Linux systems, the capabilities and behavior of various terminal types is described using
terminfo(5) files,
usually installed as part of the ncurses package. These files are often installed in
/lib/terminfo or /usr/share/terminfo , in subdirectories by first letter.
In order to use a particular TERM string, an appropriate file must exist in one of
these directories. On Debian-derived systems, a large collection of terminal types can be installed
to the system with the
ncurses-term
package.
For example, the following variants of the rxvt terminal emulator are all available:
$ cd /usr/share/terminfo/r
$ ls rxvt*
rxvt-16color rxvt-256color rxvt-88color rxvt-color rxvt-cygwin
rxvt-cygwin-native rxvt+pcfkeys rxvt-unicode-256color rxvt-xpm
Private and custom terminfo(5) files
If you connect to a system that doesn't have a terminfo(5) definition to match the
TERM definition for your particular terminal, you might get a message similar to this
on login:
setterm: rxvt-unicode-256color: unknown terminal type
tput: unknown terminal "rxvt-unicode-256color"
$
If you're not able to install the appropriate terminal definition system-wide, one technique is
to use a private .terminfo directory in your home directory containing the definitions
you need:
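For example, a private directory might contain just the definitions you actually use (the paths shown are illustrative):
$ find ~/.terminfo -type f
/home/you/.terminfo/r/rxvt-unicode-256color
/home/you/.terminfo/p/putty-256color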
You can copy this to your home directory on the servers you manage with a tool like scp
:
$ scp -r .terminfo server:
TERM and multiplexers
Terminal multiplexers like screen(1)
and tmux(1)
are special cases, and they cause perhaps the most confusion to people when inaccurate TERM
strings are used. The tmux
FAQ even opens by saying that most of the display problems reported by people are due to incorrect
TERM settings, and a good portion of the codebase in both multiplexers is dedicated
to negotiating the differences between terminal capacities.
This is because they are "terminals within terminals", and provide their own functionality only
within the bounds of what the outer terminal can do. In addition to this, they have their
own type for terminals within them; both of them use screen and its variants, such as
screen-256color .
It's therefore very important to check that both the outer and inner definitions
for TERM are correct. In .screenrc it usually suffices to use a line like
the following:
term screen
Or in .tmux.conf :
set-option -g default-terminal screen
If the outer terminals you use consistently have 256 color capabilities, you may choose to use
the screen-256color variant instead.
If you follow all of these guidelines, your terminal experience will be much smoother, as your
terminal and your system will understand each other that much better. You may find that this fixes
a lot of struggles with interactive tools like
vim(1) , for one thing,
because if the application is able to divine things like the available color space directly from
terminal information files, it saves you from having to include nasty hacks on the t_Co
variable in your .vimrc .
PuTTY is a terminal emulator with a free software license, including an SSH client.
While it has cross-platform ports, it's used most frequently on Windows systems, because they otherwise
lack a built-in terminal emulator that interoperates well with Unix-style TTY systems.
While it's very popular and useful, PuTTY's defaults are quite old, and are chosen for compatibility
reasons rather than to take advantage of all the features of a more complete terminal emulator. For
new users, this is likely an advantage as it can avoid confusion, but more advanced users who need
to use a Windows client to connect to a modern GNU/Linux system may find the defaults frustrating,
particularly when connecting to a more capable and custom-configured server.
Here are a few of the problems with the default configuration:
It identifies itself as an xterm(1) , when terminfo(5) definitions
are available named putty and putty-256color , which more precisely
define what the terminal can and cannot do, and their various custom escape sequences.
It only allows 16 colors, where most modern terminals are capable of using 256; this is partly
tied into the terminal type definition.
It doesn't use UTF-8 by default, which
should be used whenever possible
for reasons of interoperability and compatibility, and is well-supported by modern locale
definitions on GNU/Linux.
It uses Courier New, a workable but rather harsh monospace font, which should be swapped out
for something more modern if available.
It uses audible terminal bells, which tend to be annoying.
Its default palette based on xterm(1) is rather garish and harsh; softer colors
are more pleasant to read.
All of these things are fixable.
Terminal type
Usually the most important thing in getting a terminal working smoothly is to make sure it identifies
itself correctly to the machine to which it's connecting, using an appropriate $TERM
string. By default, PuTTY identifies itself as an xterm(1) terminal emulator, which
most systems will support.
However, there's a terminfo(5) definition for putty and putty-256color
available as part of ncurses , and if you have it available on your system then you
should use it, as it slightly more precisely describes the features available to PuTTY as a terminal
emulator.
You can check that you have the appropriate terminfo(5) definition installed by looking
in /usr/share/terminfo/p :
$ ls -1 /usr/share/terminfo/p/putty*
/usr/share/terminfo/p/putty
/usr/share/terminfo/p/putty-256color
/usr/share/terminfo/p/putty-sco
/usr/share/terminfo/p/putty-vt100
On Debian and Ubuntu systems, these files can be installed with:
# apt-get install ncurses-term
If you can't install the files via your system's package manager, you can also keep a private
repository of terminfo(5) files in your home directory, in a directory called
.terminfo :
$ ls -1 $HOME/.terminfo/p
putty
putty-256color
Once you have this definition installed, you can instruct PuTTY to identify with that $TERM
string in the Connection > Data section:
Here, I've used putty-256color ; if you don't need or want a 256 color terminal you
could just use putty .
Once connected, make sure that your $TERM string matches what you specified, and
hasn't been mangled by any of your shell or terminal configurations:
$ echo $TERM
putty-256color
Color space
Certain command line applications like Vim and Tmux can take advantage of
a full 256 colors
in the terminal. If you'd like to use this, set PuTTY's $TERM string to putty-256color
as outlined above, and select Allow terminal to use xterm 256-colour mode in Window > Colours
You can test this is working by using a 256 color application, or by trying out the terminal colours
directly in your shell using tput :
$ for ((color = 0; color <= 255; color++)); do
> tput setaf "$color"
> printf "test"
> done
If you see the word test in many different colors, then things are probably working.
Type reset to fix your terminal after this:
$ reset
Using UTF-8
If you're connecting to a modern GNU/Linux system, it's likely that you're using a UTF-8 locale.
You can check which one by typing locale . In my case, I'm using the en_NZ
locale with UTF-8 character encoding:
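Something like the following, abbreviated here:
$ locale
LANG=en_NZ.UTF-8
LC_CTYPE="en_NZ.UTF-8"
LC_MESSAGES="en_NZ.UTF-8"
LC_ALL=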
If the output of locale does show you're using a UTF-8 character encoding, then you
should configure PuTTY to interpret terminal output using that character set; it can't detect it
automatically (which isn't PuTTY's fault; it's a known hard problem). You do this in the Window >
Translation section:
While you're in this section, it's best to choose the Use Unicode line drawing code points option
as well. Line-drawing characters are most likely to work properly with this setting for UTF-8 locales
and modern fonts:
If Unicode and its various encodings is new to you, I highly recommend
Joel Spolsky's classic
article about what programmers should know about both.
Fonts
Courier New is a workable monospace font, but modern Windows systems include
Consolas , a much nicer terminal
font. You can change this in the Window > Appearance section:
There's no reason you can't use another favourite Bitmap or TrueType font instead once it's installed
on your system; DejaVu Sans Mono
, Inconsolata , and
Terminus are popular alternatives.
I personally favor Ubuntu Mono .
Bells
Terminal bells by default in PuTTY emit the system alert sound. Most people find this annoying;
some sort of visual bell tends to be much better if you want to use the bell at all. Configure this
in Terminal > Bell
Given the purpose of the alert is to draw attention to the window, I find that using a flashing
taskbar icon works well; I use this to draw my attention to my prompt being displayed after a long
task completes, or if someone mentions my name or directly messages me in irssi(1) .
Another option is using the Visual bell (flash window) option, but I personally find this even
worse than the audible bell.
Default palette
The default colours for PuTTY are rather like those used in xterm(1) , and hence
rather harsh, particularly if you're used to the slightly more subdued colorscheme of terminal emulators
like gnome-terminal(1) , or have customized your palette to something like
Solarized .
If you have decimal RGB values for the colours you'd prefer to use, you can enter those in the
Window > Colours section, making sure that Use system colours and Attempt to use logical palettes
are unchecked:
There are a few other default annoyances in PuTTY, but the above are the ones that seem to annoy
advanced users most frequently. Dag Wieers has
a similar post with a few more defaults to fix.
Time to move away from HPE Software 15 September 2016 · Filed in Opinion
If you are still using HPE Software, you should actively plan to migrate away. The recent
divestiture does not look good to me - I think existing customers are going to get soaked. Plan
your migration now.
I've said it before, that I retain a soft spot for Hewlett-Packard. They gave me my first
professional job out of university. I served my sentence doing HP OpenView consulting, and
HP-UX Administration, but still: it got me started. Once you have some professional experience,
it's much easier to move to the next role.
It saddens me to watch HP's ongoing struggles. It's sad to watch a big ship get broken up
for parts. But things had to change. They need to do something to adapt to the realities of
modern IT demands.
There was one line in the
recent announcement about divesting HPE's software assets that stood out to me:
Micro Focus expects to improve the margin on HPE's software assets by approximately 20
percentage points by the end of the third full financial year following the closing of the
transaction
It has been clear for a while that HP Software was no longer a core asset for HPE. It was
clear that it was not adapting, and was being starved for investment. Revenues have seen
decline. Smart customers have seen this coming, and have been actively migrating away from HPE
Software.
But if you're still using it, you should pay attention to that press release. How do you
think Micro Focus plans to improve margins by 20 percentage points? That's a lot of margin.
You've got three options:
Increase sales. Software development has high fixed costs, so margin improves with
additional sales.
Increase prices, collecting more money from existing customers.
Reduce investment, spending less to improve margins, and hope customers don't
notice.
This is a mature business. They will have a low percentage of new customers. Most revenue
will be coming from existing customers. It is not a growth market. So what's left? Raise
prices, and reduce investment.
If you're an existing customer, expect to see more license audits, and higher renewal
quotes. Expect to see feature stagnation.
It won't happen straight away, but it will happen. If you're still delaying that
migration, time to get a move on.
HP OM has not adapted well to modern demands. It does not deal well with VMs being deployed
at a high rate. It does not offer service monitoring capabilities. It does not offer any way to
connect to cloud provider APIs. The agents have continued to be unstable. The administrative
interface for OML/OMU looks like something I wrote over a weekend, based on a dodgy PHP
shopping cart. It does not look like a piece of software that costs tens of thousands of
dollars. Or actually maybe it does - Enterprise software in general tends to be ugly. HP didn't
even develop it themselves - they licensed the admin interface from Blue Elephant Systems . The Java GUI for OML/OMU was a
disgrace in 2002 - and it hasn't changed since.
Again, at another site where they are attempting to move to OMi (BSM). Just a note here:
BSM is the top tier interface through which other products flow. A crude analogy would be
Microsoft Office as the suite in which many other products like Access, Outlook, OneNote,
etc. are pieces or parts. OMi is a piece of the comprehensive suite of tools
offered by HP Software, just like "OpenView" was the umbrella word used for all HP tools like
OM-W, OM-U, NNM, OV...PI, TA, SI, PM and a host of other products. The jury is still out on
whether the products are viable as a management suite. One major consideration is ROI.
Problems still exist in ALL the tools. SiS does not provide the capabilities or granularity
the agents have. I could write or borrow scripts (Perl, Shell, VB, PowerShell) to effectively do
everything it does. OMi loses CIs, does not get critical messages forwarded, and loses
communication with the agents it is supposedly managing, for starters; NNMi has issues not
finding nodes that it should discover when discovery filters are configured. And I could add
a dozen other "dirty diapers" in the suite. Yet, one can see where HP is trying to go here.
If a few of those 400 million development dollars are thrown at the suite, it could prove a
valuable suite in any IT department's arsenal.
Theoretically BSM/OMi looks like an HPOM alternative, but looking at the scalability, the
TCO, the complexity (and, and, and ...) it isn't. If you are wary about migrating to BSM, be
provocative and ask HP for a reference implementation and analyse the length and cost of the
implementation.
OMi is a little cleaner; at my last customer site it at least functioned in the 10.12
version. HP couldn't sell BSM with all the integrations like they thought. I personally know
of several large enterprises that unceremoniously dumped ALL HP products, like Data
Protector and HP-UX, when the monitoring tool became an albatross around their necks as far as
the implementation and complexity of BSM/OMi goes. So, HP has done what HP always does when
they have a major malfunction in marketing. They "REBRANDED"!!!! Seems that is what you see in
companies that go out of business in one location, then move to a secondary spot. Or, better
yet, they have a huge "going out of business" sale and the products never get lowered in
price; they actually mark them up. If they sell, great; if not, then they close for a couple of
weeks and company A opens as company B with all the same inventory at a marked-up price.
Maybe not really the scenario HP is using, but close. OMi by itself, without the uCMDB (which
causes other issues when reconciliation occurs between agent-based CIs and CITs and what is
found via the scripts uCMDB uses to collect data: mismatches arise as each sees it differently,
and then the CI or CIT is removed; if it is a critical system...boom...no monitoring, as the
policies are gone and there is only a reference to the CI in OMi), seems stable, as noted,
though they are now at version 10.61...and by the way...the patch from 10.60 to
61 is flawed. Aside from the complications of TQLs, RTSM, etc., etc., it looks a whole
lot more stable.
You're right - I've seen those implementation plans, and it gets very expensive, very
quickly. You have to put in a lot of effort just into getting software installed and
integrated - none of which is of any direct value to the customer. Maybe justifiable in huge
environments, but for the rest of us? Not really.
Customers shouldn't have to pay for fixing broken integrations, they should be able to just
start using the software to solve their business problems. We're years away from reaching
that point though.
The time-based job scheduler cron(8)
has been around since Version 7 Unix, and its
crontab(5) syntax is
familiar even for people who don't do much Unix system administration. It's standardised,
reasonably flexible, simple to configure, and works reliably, and so it's trusted by both system
packages and users to manage many important tasks.
However, like many older Unix tools, cron(8) 's simplicity has a drawback: it relies
upon the user to know some detail of how it works, and to correctly implement any other safety checking
behaviour around it. Specifically, all it does is try to run the job at an appropriate time, and
email the output. For simple and unimportant per-user jobs, that may be just fine, but for more crucial
system tasks it's worthwhile to wrap a little extra infrastructure around it and the tasks it calls.
There are a few ways to make the way you use cron(8) more robust if you're in a situation
where keeping track of the running job is desirable.
Apply the principle of least privilege
The sixth column of a system crontab(5) file is the username of the user as which
the task should run:
0 * * * * root cron-task
To the extent that is practical, you should run the task as a user with only the privileges it
needs to run, and nothing else. This can sometimes make it worthwhile to create a dedicated system
user purely for running scheduled tasks relevant to your application.
0 * * * * myappcron cron-task
This is not just for security reasons, although those are good ones; it helps protect you against
nasties like scripting errors attempting to
remove entire
system directories .
Similarly, for tasks with database systems such as MySQL, don't use the administrative root
user if you can avoid it; instead, use or even create a dedicated user with a unique random password
stored in a locked-down ~/.my.cnf file, with only the needed permissions. For a MySQL
backup task, for example, only a few permissions should be required, including SELECT
, SHOW VIEW , and LOCK TABLES .
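As a sketch of that idea, a locked-down ~/.my.cnf for such a dedicated user might look something like the following; the user name and password here are placeholders, not anything from a real setup:
# Hypothetical ~/.my.cnf for a dedicated MySQL backup user; keep it chmod 600
[client]
user=backupuser
password=use-a-long-random-password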
In some cases, of course, you really will need to be root . In particularly sensitive
contexts you might even consider using sudo(8) with appropriate NOPASSWD
options, to allow the dedicated user to run only the appropriate tasks as root , and
nothing else.
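As a sketch of that last idea, a sudoers fragment along these lines would let a dedicated user run just one task as root ; the user name and script path are hypothetical:
# /etc/sudoers.d/myappcron -- hypothetical fragment; edit with visudo
myappcron ALL=(root) NOPASSWD: /usr/local/bin/cron-task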
Test the tasks
Before placing a task in a crontab(5) file, you should test it on the command line,
as the user configured to run the task and with the appropriate environment set. If you're going
to run the task as root , use something like su or sudo -i
to get a root shell with the user's expected environment first:
$ sudo -i -u cronuser
$ cron-task
Once the task works on the command line, place it in the crontab(5) file with the
timing settings modified to run the task a few minutes later, and then watch /var/log/syslog
with tail -f to check that the task actually runs without errors, and that the task
itself completes properly:
May 7 13:30:01 yourhost CRON[20249]: (you) CMD (cron-task)
This may seem pedantic at first, but it becomes routine very quickly, and it saves a lot of hassles
down the line as it's very easy to make an assumption about something in your environment that doesn't
actually hold in the one that cron(8) will use. It's also a necessary acid test to make
sure that your crontab(5) file is well-formed, as some implementations of cron(8)
will refuse to load the entire file if one of the lines is malformed.
If necessary, you can set arbitrary environment variables for the tasks at the top of the file:
MYVAR=myvalue
0 * * * * you cron-task
Don't throw away errors or useful output
You've probably seen tutorials on the web where in order to keep the crontab(5) job
from sending standard output and/or standard error emails every five minutes, shell redirection operators
are included at the end of the job specification to discard both the standard output and standard
error. This kluge is particularly common for running web development tasks by automating a request
to a URL with curl(1)
or wget(1) :
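A typical line of the kind being described might look something like this; the schedule and URL are purely illustrative:
*/5 * * * * root curl http://example.com/cron.php >/dev/null 2>&1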
Ignoring the output completely is generally not a good idea, because unless you have other tasks
or monitoring ensuring the job does its work, you won't notice problems (or know what they are),
when the job emits output or errors that you actually care about.
In the case of curl(1) , there are just way too many things that could go wrong,
that you might notice far too late:
The script could get broken and return 500 errors.
The URL of the cron.php task could change, and someone could forget to add an
HTTP 301 redirect.
Even if an HTTP 301 redirect is added, if you don't use -L or --location
for curl(1) , it won't follow it.
The client could get blacklisted, firewalled, or otherwise impeded by automatic or manual
processes that falsely flag the request as spam.
If using HTTPS, connectivity could break due to cipher or protocol mismatch.
The author has seen all of the above happen, in some cases very frequently.
As a general policy, it's worth taking the time to read the manual page of the task you're calling,
and to look for ways to correctly control its output so that it emits only the output you actually
want. In the case of curl(1) , for example, I've found the following formula works well:
curl -fLsS -o /dev/null http://example.com/
-f : If the HTTP response code is an error, emit an error message rather than
the 404 page.
-L : If there's an HTTP 301 redirect given, try to follow it.
-sS : Don't show progress meter ( -S stops -s from
also blocking error messages).
-o /dev/null : Send the standard output (the actual page returned) to /dev/null
.
This way, the curl(1) request should stay silent if everything is well, per the old
Unix philosophy Rule of Silence
.
You may not agree with some of the choices above; you might think it important to e.g. log the
complete output of the returned page, or to fail rather than silently accept a 301 redirect, or you
might prefer to use wget(1) . The point is that you take the time to understand in more
depth what the called program will actually emit under what circumstances, and make it match your
requirements as closely as possible, rather than blindly discarding all the output and (worse) the
errors. Work with Murphy's law
; assume that anything that can go wrong eventually will.
Send the output somewhere useful
Another common mistake is failing to set a useful MAILTO at the top of the
crontab(5) file, as the specified destination for any output and errors from the tasks.
cron(8) uses the system mail implementation to send its messages, and typically, default
configurations for mail agents will simply send the message to an mbox file in
/var/mail/$USER , which the user may never read. This defeats much of the point of mailing output
and errors.
This is easily dealt with, though; ensure that you can send a message to an address you actually
do check from the server, perhaps using mail(1) :
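For example, something along these lines should land a test message in the inbox you actually check; the address is a placeholder:
$ printf '%s\n' 'Test message from cron host' | mail -s 'Test' you@example.com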
Once you've verified that your mail agent is correctly configured and that the mail arrives in
your inbox, set the address in a MAILTO variable at the top of your file:
MAILTO=you@example.com
0 * * * * you cron-task-1
*/5 * * * * you cron-task-2
If you don't want to use email for routine output, another method that works is sending the output
to syslog with a tool like
logger(1) :
0 * * * * you cron-task | logger -it cron-task
Alternatively, you can configure aliases on your system to forward system mail destined for you
on to an address you check. For Postfix, you'd use an
aliases(5) file.
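As a sketch, assuming a local user named you and a placeholder external address, the entry might be:
# /etc/aliases -- run newaliases after editing
you: you@example.com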
I sometimes use this setup in cases where the task is expected to emit a few lines of output which
might be useful for later review, but send stderr output via MAILTO as
normal. If you'd rather not use syslog , perhaps because the output is high in volume
and/or frequency, you can always set up a log file /var/log/cron-task.log but don't
forget to add a logrotate(8)
rule for it!
Put the tasks in their own shell script file
Ideally, the commands in your crontab(5) definitions should only be a few words,
in one or two commands. If the command is running off the screen, it's likely too long to be in the
crontab(5) file, and you should instead put it into its own script. This is a particularly
good idea if you want to reliably use features of bash or some other shell besides POSIX/Bourne
/bin/sh for your commands, or even a scripting language like Awk or Perl; by default,
cron(8) uses the system's /bin/sh implementation for parsing the commands.
Because crontab(5) files don't allow multi-line commands, and have other gotchas
like the need to escape percent signs % with backslashes, keeping as much configuration
out of the actual crontab(5) file as you can is generally a good idea.
If you're running cron(8) tasks as a non-system user, and can't add scripts into
a system bindir like /usr/local/bin , a tidy method is to start your own, and include
a reference to it as part of your PATH . I favour ~/.local/bin , and have
seen references to ~/bin as well. Save the script in ~/.local/bin/cron-task
, make it executable with chmod +x , and include the directory in the PATH
environment definition at the top of the file:
PATH=/home/you/.local/bin:/usr/local/bin:/usr/bin:/bin
MAILTO=you@example.com
0 * * * * you cron-task
Having your own directory with custom scripts for your own purposes has a host of other benefits,
but that's another article.
Avoid /etc/crontab
If your implementation of cron(8) supports it, rather than having an /etc/crontab
file a mile long, you can put tasks into separate files in /etc/cron.d :
$ ls /etc/cron.d
system-a
system-b
raid-maint
This approach allows you to group the configuration files meaningfully, so that you and other
administrators can find the appropriate tasks more easily; it also allows you to make some files
editable by some users and not others, and reduces the chance of edit conflicts. Using sudoedit(8)
helps here too. Another advantage is that it works better with version control; if I start collecting
more than a few of these task files, or find myself updating them more often than every few months,
I start a Git repository to track them:
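As a sketch, that might go something like this:
$ cd /etc/cron.d
$ sudo git init
$ sudo git add .
$ sudo git commit -m "Initial commit of scheduled task definitions"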
If you're editing a crontab(5) file for tasks related only to the individual user,
use the crontab(1) tool; you can edit your own crontab(5) by typing
crontab -e , which will open your $EDITOR to edit a temporary file that
will be installed on exit. This will save the files into a dedicated directory, which on my system
is /var/spool/cron/crontabs .
On the systems maintained by the author, it's quite normal for /etc/crontab never
to change from its packaged template.
Include a timeout
cron(8) will normally allow a task to run indefinitely, so if this is not desirable,
you should consider either using options of the program you're calling to implement a timeout, or
including one in the script. If there's no option for the command itself, the
timeout(1) command
wrapper in coreutils is one possible way of implementing this:
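For example, to kill the task if it's still running after ten minutes (the duration here is arbitrary):
0 * * * * you timeout 10m cron-task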
cron(8) will start a new process regardless of whether its previous runs have completed,
so if you wish to prevent overlapping runs of a long-running task, on GNU/Linux you could use the
flock(1) wrapper for
the flock(2) system call
to set an exclusive lockfile, in order to prevent the task from running more than one instance in
parallel.
0 * * * * you flock -nx /var/lock/cron-task cron-task
Greg's wiki has some more in-depth discussion of the
file locking problem for scripts
in a general sense, including important information about the caveats of "rolling your own" when
flock(1) is not available.
If it's important that your tasks run in a certain order, consider whether it's necessary to have
them in separate tasks at all; it may be easier to guarantee they're run sequentially by collecting
them in a single shell script.
Do something useful with exit statuses
If your cron(8) task or commands within its script exit non-zero, it can be useful
to run commands that handle the failure appropriately, including cleanup of appropriate resources,
and sending information to monitoring tools about the current status of the job. If you're using
Nagios Core or one of its derivatives, you could consider using send_nsca to send passive
checks reporting the status of jobs to your monitoring server. I've written
a simple script called
nscaw to do this for me:
0 * * * * you nscaw CRON_TASK -- cron-task
Consider alternatives to cron(8)
If your machine isn't always on and your task doesn't need to run at a specific time, but rather
needs to run once daily or weekly, you can install
anacron and drop scripts
into the cron.hourly , cron.daily , cron.monthly , and
cron.weekly directories in /etc , as appropriate. Note that on Debian and
Ubuntu GNU/Linux systems, the default /etc/crontab contains hooks that run these, but
they run only if anacron(8)
is not installed.
If you're using cron(8) to poll a directory for changes and run a script if there
are such changes, on GNU/Linux you could consider using a daemon based on inotifywait(1)
instead.
Finally, if you require more advanced control over when and how your task runs than cron(8)
can provide, you could perhaps consider writing a daemon to run on the server consistently and fork
processes for its task. This would allow running a task more often than once a minute, as an example.
Don't get too bogged down in thinking that cron(8) is your only option for any kind
of asynchronous task management!
ls is probably one of the first commands an administrator
will learn for getting a simple list of the contents of a directory. Most
administrators will also know about the -a and -l
switches, to show all files including dot files and to show more detailed data
about files in columns, respectively.
There are other switches to GNU ls which are less frequently used,
some of which turn out to be very useful for programming:
-t - List files in order of last modification date, newest
first. This is useful for very large directories when you want to get a quick
list of the most recent files changed, maybe piped through head or
sed 10q. Probably most useful combined with -l. If
you want the oldest files, you can add -r to reverse the
list.
-X - Group files by extension; handy for polyglot code, to
group header files and source files separately, or to separate source files
from directories or build files.
-v - Naturally sort version numbers in filenames.
-S - Sort by filesize.
-R - List files recursively. This one is good combined with
-l and piped through a pager like less.
Since the listing is text like anything else, you could, for example, pipe the
output of this command into a vim process, so you could add
explanations of what each file is for and save it as an inventory
file or add it to a README:
$ ls -XR | vim -
This kind of stuff can even be automated by make with a little
work, which I'll cover in another article later in the series.
A more flexible method for defining custom commands for an interactive shell (or within a script)
is to use a shell function. We could declare our ll function in a Bash startup file
as a function instead of an alias like so:
# Shortcut to call ls(1) with the -l flag
ll() {
command ls -l "$@"
}
Note the use of the command builtin here to specify that the ll function
should invoke the program named ls , and not any function named
ls . This is particularly important when writing a function wrapper around a command,
to stop an infinite loop where the function calls itself indefinitely:
# Always add -q to invocations of gdb(1)
gdb() {
command gdb -q "$@"
}
In both examples, note also the use of the "$@" expansion, to add to the final command
line any arguments given to the function. We wrap it in double quotes to stop spaces and other shell
metacharacters in the arguments causing problems. This means that the ll command will
work correctly if you were to pass it further options and/or one or more directories as arguments:
$ ll -a
$ ll ~/.config
Shell functions declared in this way are specified by POSIX for Bourne-style shells, so they should
work in your shell of choice, including Bash, dash , Korn shell, and Zsh. They can also
be used within scripts, allowing you to abstract away multiple instances of similar commands to improve
the clarity of your script, in much the same way the basics of functions work in general-purpose
programming languages.
Functions are a good and portable way to approach adding features to your interactive shell; written
carefully, they even allow you to port features you might like from other shells into your shell
of choice. I'm fond of taking commands I like from Korn shell or Zsh and implementing them in Bash
or POSIX shell functions, such as Zsh's
vared or its
two-argument
cd features.
If you end up writing a lot of shell functions, you should consider putting them into
separate configuration
subfiles to keep your shell's primary startup file from becoming unmanageably large.
Examples from the author
You can take a look at some of the shell functions I have defined here that are useful to me in
general shell usage; a lot of these amount to implementing convenience features that I wish my shell
had, especially for quick directory navigation, or adding options to commands:
You can manipulate variables within shell functions, too:
# Print the filename of a path, stripping off its leading path and
# extension
fn() {
name=$1
name=${name##*/}
name=${name%.*}
printf '%s\n' "$name"
}
This works fine, but the catch is that after the function is done, the value for name
will still be defined in the shell, and will overwrite whatever was in there previously:
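For example, with an arbitrary path, the variable lingers in the calling shell after the function returns:
$ fn /home/you/Documents/notes.txt
notes
$ printf '%s\n' "$name"
notes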
This may be desirable if you actually want the function to change some aspect of your current
shell session, such as managing variables or changing the working directory. If you don't
want that, you will probably want to find some means of avoiding name collisions in your variables.
If your function is only for use with a shell that provides the local (Bash) or
typeset (Ksh) features, you can declare the variable as local to the function to remove
its global scope, to prevent this happening:
# Bash-like
fn() {
local name
name=$1
name=${name##*/}
name=${name%.*}
printf '%s\n' "$name"
}
# Ksh-like
# Note different syntax for first line
function fn {
typeset name
name=$1
name=${name##*/}
name=${name%.*}
printf '%s\n' "$name"
}
If you're using a shell that lacks these features, or you want to aim for POSIX compatibility,
things are a little trickier, since local function variables aren't specified by the standard. One
option is to use a subshell , so
that the variables are only defined for the duration of the function:
# POSIX; note we're using plain parentheses rather than curly brackets, for
# a subshell
fn() (
name=$1
name=${name##*/}
name=${name%.*}
printf '%s\n' "$name"
)
# POSIX; alternative approach using command substitution:
fn() {
printf '%s\n' "$(
name=$1
name=${name##*/}
name=${name%.*}
printf %s "$name"
)"
}
This subshell method also allows you to change directory with cd within a function
without changing the working directory of the user's interactive shell, or to change shell options
with set or Bash options with shopt only temporarily for the purposes of
the function.
Another method to deal with variables is to manipulate the
positional parameters directly ( $1 , $2 ) with set ,
since they are local to the function call too:
# POSIX; using positional parameters
fn() {
set -- "${1##*/}"
set -- "${1%.*}"
printf '%s\n' "$1"
}
These methods work well, and can sometimes even be combined, but they're awkward to write, and
harder to read than the modern shell versions. If you only need your functions to work with your
modern shell, I recommend just using local or typeset . The Bash Guide
on Greg's Wiki has a
very thorough
breakdown of functions in Bash, if you want to read about this and other aspects of functions
in more detail.
Keeping functions for later
As you get comfortable with defining and using functions during an interactive session, you might
define them in ad-hoc ways on the command line for calling in a loop or some other similar circumstance,
just to solve a task in that moment.
As an example, I recently made an ad-hoc function called monit to run a set of commands
for its hostname argument that together established different types of monitoring system checks,
using an existing script called nmfs :
$ monit() { nmfs "$1" Ping Y ; nmfs "$1" HTTP Y ; nmfs "$1" SNMP Y ; }
$ for host in webhost{1..10} ; do
> monit "$host"
> done
After that task was done, I realized I was likely to use the monit command interactively
again, so I decided to keep it. Shell functions only last as long as the current shell, so if you
want to make them permanent, you need to store their definitions somewhere in your startup files.
If you're using Bash, and you're content to just add things to the end of your ~/.bashrc
file, you could just do something like this:
$ declare -f monit >> ~/.bashrc
That would append the existing definition of monit in parseable form to your
~/.bashrc file, and the monit function would then be loaded and available
to you for future interactive sessions. Later on, I ended up converting monit into a
shell script, as its use wasn't limited to just an interactive shell.
If you want a more robust approach to keeping functions like this for Bash permanently, I wrote
a tool called Bashkeep , which allows you to quickly store functions and variables defined in
your current shell into separate and appropriately-named files, including viewing and managing the
list of names conveniently:
How can I see the content of a log file in real time in Linux? Well, there are a lot of utilities
out there that can help a user to output the content of a file while the file is changing or continuously
updating. One of the best-known and most heavily used utilities to display a file's content in real time
in Linux is the
tail command (manage files effectively).
As said, the tail command is the most common solution to display a log file in real time. However,
the command to display the file has two versions, as illustrated in the below examples.
In the first example the command tail needs the -f argument to follow the content
of a file.
$ sudo tail -f /var/log/apache2/access.log
The second version of the command is actually a separate command in its own right: tailf . You won't
need to use the -f switch, because tailf follows the end of the file by default.
$ sudo tailf /var/log/apache2/access.log
Usually, log files are rotated frequently on a Linux server by the logrotate utility. To watch
log files that get rotated on a daily basis, you can use the -F flag to the tail command.
tail -F will keep track of whether a new log file has been created and will start following
the new file instead of the old one.
$ sudo tail -F /var/log/apache2/access.log
By default, the tail command will display the last 10 lines of a file. If, for instance, you
want to watch only the last two lines of the log file in real time, use the -n flag
combined with the -f flag, as shown in the below example.
$ sudo tail -n2 -f /var/log/apache2/access.log
2. Multitail Command – Monitor Multiple Log Files in Real Time
Another interesting command to display log files in real time is
multitail command
. As the name implies, the multitail utility can monitor and keep track of multiple files
in real time. Multitail also lets you navigate back and forth in the monitored file.
To install the multitail utility on Debian and RedHat based systems, issue the command below.
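As a sketch, assuming the distribution packages it under the name multitail (RedHat-family systems may need the EPEL repository enabled):
$ sudo apt-get install multitail      # Debian/Ubuntu
$ sudo yum install multitail          # RHEL/CentOS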
3. lnav Command – Monitor Multiple Log Files in Real Time
Another interesting command, similar to multitail command is the
lnav
command . Lnav utility can also watch and follow multiple files and display their content in
real time.
To install the lnav utility on Debian and RedHat based Linux distributions, issue the command below.
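Again as a sketch, assuming the package is simply named lnav (EPEL may be needed on RedHat-family systems):
$ sudo apt-get install lnav      # Debian/Ubuntu
$ sudo yum install lnav          # RHEL/CentOS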
4. less Command – Display Real Time Output of Log Files
Finally, you can display the live output of a file with
less
command if you type Shift+F .
As with the tail utility, pressing Shift+F in a file opened in less will start following
the end of the file. Alternatively, you can also start less with the +F flag to enter
live watching of the file directly.
$ sudo less +F /var/log/apache2/access.log
That's it! You may also want to read the following articles on log monitoring and management.
The file tool gives you a one-line summary of what kind of file you're looking at,
based on its extension, headers and other cues. This is very handy used with find when
examining a set of unfamiliar files:
$ find . -exec file {} +
.: directory
./hanoi: Perl script, ASCII text executable
./.hanoi.swp: Vim swap file, version 7.3
./factorial: Perl script, ASCII text executable
./bits.c: C source, ASCII text
./bits: ELF 32-bit LSB executable, Intel 80386, version ...
Oftentimes you may
wish to start a process on the Bash shell without having to wait for it to actually complete,
but still be notified when it does. Similarly, it may be helpful to temporarily stop a task
while it's running without actually quitting it, so that you can do other things with the
terminal. For these kinds of tasks, Bash's built-in job control is very useful.
Backgrounding processes
If you have a process that you expect to take a long time, such as a long cp or
scp operation, you can start it in the background of your current shell by adding
an ampersand to it as a suffix:
$ cp -r /mnt/bigdir /home &
[1] 2305
This will start the copy operation as a child process of your bash instance,
but will return you to the prompt to enter any other commands you might want to run while
that's going.
The output from this command shown above gives both the job number of 1, and the process ID
of the new task, 2305. You can view the list of jobs for the current shell with the builtin
jobs :
$ jobs
[1]+ Running cp -r /mnt/bigdir /home &
If the job finishes or otherwise terminates while it's backgrounded, you should see a
message in the terminal the next time you update it with a newline:
[1]+ Done cp -r /mnt/bigdir /home &
Foregrounding processes
If you want to return a job in the background to the foreground, you can type
fg :
$ fg
cp -r /mnt/bigdir /home &
If you have more than one job backgrounded, you should specify the particular job to bring
to the foreground with a parameter to fg :
$ fg %1
In this case, for shorthand, you can optionally omit fg and it will work just
the same:
$ %1
Suspending processes
To temporarily suspend a process, you can press Ctrl+Z:
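For example, suspending a text editor session might look something like this; the job number will vary:
$ vim ~/.bashrc
# ... press Ctrl+Z while vim has the terminal ...
[1]+  Stopped                 vim ~/.bashrc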
You can then continue it in the foreground or background with fg %1 or bg
%1 respectively, as above.
This is particularly useful while in a text editor; instead of quitting the editor to get
back to a shell, or dropping into a subshell from it, you can suspend it temporarily and return
to it with fg once you're ready.
Dealing with output
While a job is running in the background, it may still print its standard output and
standard error streams to your terminal. You can head this off by redirecting both streams to
/dev/null for verbose commands:
$ cp -rv /mnt/bigdir /home &>/dev/null &
However, if the output of the task is actually of interest to you, this may be a case where
you should fire up another terminal emulator, perhaps in GNU Screen or tmux , rather than using simple job control.
Suspending SSH
sessions
As a special case, you can suspend an SSH session using an SSH escape sequence . Type a
newline followed by a ~ character, and finally press Ctrl+Z to background your SSH session and
return to the terminal from which you invoked it.
For many system
administrators, Awk is used only as a way to print specific columns of data from programs that
generate columnar output, such as netstat or ps .
For example, to get
a list of all the IP addresses and ports with open TCP connections on a machine, one might run
the following:
# netstat -ant | awk '{print $5}'
This works pretty well, but among the data you actually wanted it also includes the fifth
word of the opening explanatory note, and the heading of the fifth column:
and
Address
0.0.0.0:*
205.188.17.70:443
172.20.0.236:5222
72.14.203.125:5222
There are varying ways to deal with this.
Matching patterns
One common way is to pipe the output further through a call to grep , perhaps
to only include results with at least one number:
# netstat -ant | awk '{print $5}' | grep '[0-9]'
In this case, it's instructive to use the awk call a bit more intelligently by
setting a regular expression which the applicable line must match in order for that field to be
printed, with the standard / characters as delimiters. This eliminates the need
for the call to grep :
# netstat -ant | awk '/[0-9]/ {print $5}'
We can further refine this by ensuring that the regular expression should only match data in
the fifth column of the output, using the ~ operator:
# netstat -ant | awk '$5 ~ /[0-9]/ {print $5}'
Skipping lines
Another approach you could take to strip the headers out might be to use sed to
skip the first two lines of the output:
# netstat -ant | awk '{print $5}' | sed 1,2d
However, this can also be incorporated into the awk call, using the
NR variable and making it part of a conditional checking the line number is
greater than two:
# netstat -ant | awk 'NR>2 {print $5}'
Combining and excluding patterns
Another common idiom on systems that don't have the special pgrep command is to
filter ps output for a string, but exclude the grep process itself
from the output with grep -v grep :
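That idiom usually looks something like this, using apache as an example process name:
# ps -ef | grep apache | grep -v grep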
If you're using Awk to get columnar data from the output, in this case the second column
containing the process ID, both calls to grep can instead be incorporated into the
awk call:
# ps -ef | awk '/apache/ && !/awk/ {print $2}'
Again, this can be further refined if necessary to ensure you're only matching the
expressions against the command name by specifying the field number for each comparison:
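Which field holds the command name depends on your ps output format; with ps -ef it's commonly the eighth field, so a sketch might be:
# ps -ef | awk '$8 ~ /apache/ && $8 !~ /awk/ {print $2}'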
If you're used to using Awk purely as a column filter, the above might help to increase its
utility for you and allow you to write shorter and more efficient command lines. The Awk Primer on Wikibooks is a
really good reference for using Awk to its fullest for the sorts of tasks for which it's
especially well-suited.
Nagios is useful
for monitoring pretty much any kind of network service, with a wide variety of community-made
plugins to test pretty much anything you might need. However, its configuration and interface
can be a little bit cryptic to initiates. Fortunately, Nagios is well-packaged in Debian and
Ubuntu and provides a basic default configuration that is instructive to read and extend.
There's a reason that a lot of system administrators turn into monitoring fanatics when
tools like Nagios are available. The rapid feedback of things going wrong and being fixed and
the pleasant sea of green when all your services are up can get addictive for any halfway
dedicated administrator.
In this article I'll walk you through installing a very simple monitoring setup on a Debian
or Ubuntu server. We'll assume you have two computers in your home network, a workstation on
192.168.1.1 and a server on 192.168.1.2 , and that you maintain a web
service of some sort on a remote server, for which I'll use www.example.com .
We'll install a Nagios instance on the server that monitors both local services and the remote
webserver, and emails you if it detects any problems.
For those not running a Debian-based GNU/Linux distribution or perhaps BSD, much of the
configuration here will still apply, but the initial setup will probably be peculiar to your
ports or packaging system unless you're compiling from source.
Installing the
packages
We'll work on a freshly installed Debian Stable box as the server, which at the time of
writing is version 6.0.3 "Squeeze". If you don't have it working already, you should start by
installing Apache HTTPD:
# apt-get install apache2
Visit the server on http://192.168.1.2/ and check that you get the "It works!" page,
and that should be all you need. Note that by default this installation of Apache is not
terribly secure, so you shouldn't allow access to it from outside your private network until
you've locked it down a bit, which is outside the scope of this article.
Next we'll install the nagios3 package, which will include a default set of
useful plugins, and a simple configuration. The list of packages it needs to support these is
quite long so you may need to install a lot of dependencies, which apt-get will
manage for you.
# apt-get install nagios3
The installation procedure will include requesting a password for the administration area;
provide it with a suitable one. You may also get prompted to configure a workgroup for the
samba-common package; don't worry, you aren't installing a samba
service by doing this, it's just information for the smbclient program in case you
want to monitor any SMB/CIFS services.
That should provide you with a basic self-monitoring Nagios setup. Visit
http://192.168.1.2/nagios3/ in your browser to verify this; use the username
nagiosadmin and the password you gave during the install process. If you see
something like the below, you're in business; this is the Nagios web reporting and
administration panel.
The Nagios administration area's front page
Default setup
To start with, click the Services link in the left menu. You should see something like the
below, which is the monitoring for localhost and the service monitoring that the
packager set up for you by default:
Default Nagios monitoring hosts and services
Note that on my system, monitoring for the already-existing HTTP and SSH daemons was
automatically set up for me, along with the default checks for load average, user count, and
process count. If any of these pass a threshold, they'll turn yellow for WARNING, and red for
CRITICAL states.
This is already somewhat useful, though a server monitoring itself is a bit problematic
because of course it won't be able to tell you if it goes completely down. So for the next
step, we're going to set up monitoring for the remote host www.example.com , which
means firing up your favourite text editor to
edit a few configuration files.
Default configuration
Nagios configuration is at first blush a bit complex, because monitoring setups need to be
quite finely-tuned in order to be useful long term, particularly if you're managing a large
number of hosts. Take a look at the files in /etc/nagios3/conf.d .
You can actually arrange a Nagios configuration any way you like, including one big
well-ordered file, but it makes some sense to break it up into sections if you can. In this
case, the default setup includes the following files:
contacts_nagios2.cfg defines the people and groups of people who should
receive notifications and alerts when Nagios detects problems or resolutions.
extinfo_nagios2.cfg makes some miscellaneous enhancements to other
configurations, kept in a separate file for clarity.
generic-host_nagios2.cfg is Debian's host template, defining a few common
variables that you're likely to want for most hosts, saving you repeating yourself when
defining host definitions.
generic-service_nagios2.cfg is the same idea, but it's a template service to
monitor.
hostgroups_nagios2.cfg defines groups of hosts in case it's valuable for you
to monitor individual groups of hosts, which the Nagios admin allows you to do.
localhost_nagios2.cfg is where the monitoring for the localhost
host we were just looking at is defined.
services_nagios2.cfg is where further services are defined that might be
applied to groups.
timeperiods_nagios2.cfg defines periods of time for monitoring services; for
example, you might want to get paged if a webserver dies 24/7, but you might not care as much
about 5% packet loss on some international link at 2am on Saturday morning.
This isn't my favourite method of organising Nagios configuration, but it'll work fine for
us. We'll start by defining a remote host, and add services to it.
Testing services
First of all, let's check we actually have connectivity to the host we're monitoring from
this server for both of the services we intend to check; ICMP ECHO (PING) and HTTP.
$ ping -n -c 1 www.example.com
PING www.example.com (192.0.43.10) 56(84) bytes of data.
64 bytes from 192.0.43.10: icmp_req=1 ttl=243 time=168 ms
--- www.example.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 168.700/168.700/168.700/0.000 ms
$ wget www.example.com -O - | grep -i found
tom@novus:~$ wget www.example.com -O -
--2012-01-26 21:12:00-- http://www.example.com/
Resolving www.example.com... 192.0.43.10, 2001:500:88:200::10
Connecting to www.example.com|192.0.43.10|:80... connected.
HTTP request sent, awaiting response... 302 Found
...
All looks well, so we'll go ahead and add the host and its services.
Defining the
remote host
Write a new file in the /etc/nagios3/conf.d directory called
www.example.com_nagios2.cfg , with the following contents:
define host {
use generic-host
host_name www.example.com
address www.example.com
}
The first stanza of localhost_nagios2.cfg looks very similar to this; indeed,
it uses the same host template, generic-host . All we need to do is define what to
call the host, and where to find it.
However, in order to get it monitoring appropriate services, we might need to add it to one
of the already existing groups. Open up hostgroups_nagios2.cfg , and look for the
stanza that includes hostgroup_name http-servers . Add
www.example.com to the group's members, so that that stanza looks like this:
# A list of your web servers
define hostgroup {
hostgroup_name http-servers
alias HTTP servers
members localhost,www.example.com
}
With this done, you need to restart the Nagios process:
# service nagios3 restart
If that succeeds, you should notice a new host called "www.example.com" under your Hosts and
Services sections, and that it's being monitored for HTTP. At first the check will be PENDING, but
when the scheduled check runs, it should come back (hopefully!) as OK.
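To round out the PING check we tested earlier, a service definition along the following lines could be added to the same file and Nagios restarted again; this is a sketch that assumes the stock check_ping command from the packaged plugins, and the thresholds are arbitrary:
define service {
use generic-service
host_name www.example.com
service_description PING
check_command check_ping!100.0,20%!500.0,60%
}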
For tools
like diff that work with multiple files as parameters, it can be useful to work
with not just files on the filesystem, but also potentially with the output of arbitrary
commands. Say, for example, you wanted to compare the output of ps and ps
-e with diff -u . An obvious way to do this is to write files to compare
the output:
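The temporary-file approach might look something like this; the filenames are arbitrary:
$ ps >ps.txt
$ ps -e >ps-e.txt
$ diff -u ps.txt ps-e.txt
$ rm ps.txt ps-e.txt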
This works just fine, but Bash provides a shortcut in the form of process
substitution , allowing you to treat the standard output of commands as files. This is
done with the <() and >() operators. In our case, we want to
direct the standard output of two commands into place as files:
$ diff -u <(ps) <(ps -e)
This is functionally equivalent, except it's a little tidier because it doesn't leave files
lying around. This is also very handy for elegantly comparing files across servers, using
ssh :
$ diff -u .bashrc <(ssh remote cat .bashrc)
Conversely, you can also use the >() operator to direct from a filename
context to the standard input of a command. This is handy for setting up in-place
filters for things like logs. In the following example, I'm making a call to rsync
, specifying that it should make a log of its actions in log.txt , but filter it
through grep -vF .tmp first to remove anything matching the fixed string
.tmp :
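A sketch of that rsync call, with hypothetical source and destination directories:
$ rsync -av --log-file=>(grep -vF .tmp >log.txt) src/ dst/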
Combined with tee this syntax is a way of simulating multiple filters for a
stdout stream, transforming output from a command in as many ways as you see
fit:
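For example, splitting one ps listing several ways at once; the filters and filenames are arbitrary:
$ ps -ef | tee >(awk '$1=="root"' >root-procs.txt) \
    >(grep -F vim >vim-procs.txt) \
    >/dev/null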
In general, the idea is that wherever on the command line you could specify a file to be
read from or written to, you can instead use this syntax to make an implicit named pipe for the
text stream.
Thanks to Reddit user Rhomboid for pointing out an incorrect assertion about this syntax
necessarily abstracting mkfifo calls, which I've since removed.
With judicious use of tricks like pipes, redirects, and process substitution in modern shells, it's
very often possible to avoid using temporary files, doing everything inline and keeping them quite
neat. However when manipulating a lot of data into various formats you do find yourself occasionally
needing a temporary file, just to hold data temporarily.
A common way to deal with this is to create a temporary file in your home directory, with some
arbitrary name, something like test or working :
$ ps -ef >~/test
If you want to save the information indefinitely for later use, this makes sense, although it
would be better to give it a slightly more instructive name than just test .
If you really only needed the data temporarily, however, you're much better to use the temporary
files directory. This is usually /tmp , but for good practice's sake it's better to
check the value of TMPDIR first, and only use /tmp as a default:
$ ps -ef >"${TMPDIR:-/tmp}"/test
This is getting better, but there is still a significant problem: there's no built-in check that
the test file doesn't already exist, perhaps being used by some other user or program,
particularly another running instance of the same script.
To that end, we have the mktemp
program, which creates an empty temporary file in the appropriate directory for you without overwriting
anything, and prints the filename it created. This allows you to use the file inline in both shell
scripts and one-liners, and is much safer than specifying hardcoded paths:
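A minimal sketch of that usage:
$ tmpfile=$(mktemp)
$ ps -ef >"$tmpfile"
$ grep apache "$tmpfile"
$ rm -- "$tmpfile"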
On GNU/Linux systems, files of a sufficient age in TMPDIR are cleared on boot (controlled
in /etc/default/rcS on Debian-derived systems, /etc/cron.daily/tmpwatch
on Red Hat ones), making /tmp useful as a general scratchpad as well as for a kind of
relatively reliable inter-process communication without cluttering up users' home directories.
In some cases, there may be additional advantages in using /tmp for its designed
purpose as some administrators choose to mount it as a tmpfs filesystem, so it operates
in RAM and works very quickly. It's also common practice to set the noexec flag on the
mount to prevent malicious users from executing any code they manage to find or save in the directory.
"... One of my favourite technical presentations I've read online has been Hal Pomeranz's Unix Command-Line Kung Fu , a catalogue of shortcuts and efficient methods of doing very clever things with the Bash shell. None of these are grand arcane secrets, but they're things that are often forgotten in the course of daily admin work, when you find yourself typing something you needn't, or pressing up repeatedly to find something you wrote for which you could simply search your command history. ..."
One of my favourite
technical presentations I've read online has been Hal Pomeranz's Unix Command-Line Kung
Fu , a catalogue of shortcuts and efficient methods of doing very clever things with the
Bash shell. None of these are grand arcane secrets, but they're things that are often forgotten
in the course of daily admin work, when you find yourself typing something you needn't, or
pressing up repeatedly to find something you wrote for which you could simply search your
command history.
I highly recommend reading the whole thing, as I think even the most experienced shell users
will find there are useful tidbits in there that would make their lives easier and their time
with the shell more productive, beyond simpler things like tab completion.
Here, I'll recap two
of the things I thought were the most simple and useful items in the presentation for general
shell usage, and see if I can add a little value to them with reference to the Bash
manual.
History with Ctrl+R
For many shell users, finding a command in history means either pressing the up arrow key
repeatedly, or perhaps piping a history call through grep . It turns
out there's a much nicer way to do this, using Bash's built-in history searching functionality;
if you press Ctrl+R and start typing a search pattern, the most recent command matching that
pattern will automatically be inserted on your current line, at which point you can adapt it as
you need, or simply press Enter to run it again. You can keep pressing Ctrl+R to move further
back in your history to the next-most recent match. On my shell, if I search through my history
for git , I can pull up what I typed for a previous commit:
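The search prompt looks something like this; the matched command here is just an example:
(reverse-i-search)`git': git commit -m "Fix broken link in README"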
This functionality isn't actually exclusive to Bash; you can establish a history search
function in quite a few tools that use GNU Readline, including the MySQL client command
line.
You can search forward through history in the same way with Ctrl+S, but it's likely you'll
have to fix up a couple of terminal annoyances first.
Additionally, if like me you're a Vim user and you don't really like having to reach for the
arrow keys, or if you're on a terminal where those keys are broken for whatever reason, you can
browse back and forth within your command history with Ctrl+P (previous) and Ctrl+N (next).
These are just a few of the Emacs-style shortcuts that GNU Readline provides; check here for a more complete
list .
Repeating commands with !!
The last command you ran in Bash can be abbreviated on the next line with two exclamation
marks:
$ echo "Testing."
Testing.
$ !!
Testing.
You can use this to simply repeat a command over and over again, although for that you
really should be using watch , but more interestingly it turns out
this is very handy for building complex pipes in stages. Suppose you were building a pipeline
to digest some data generated from a program like netstat , perhaps to determine
the top 10 IP addresses that are holding open the most connections to a server. You might be
able to build a pipeline like this:
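Built up in stages with !! , such a pipeline might develop along these lines; treat this as a sketch, since the exact fields depend on your netstat output:
$ netstat -ant
$ !! | awk '{print $5}'
$ !! | cut -d: -f1
$ !! | sort | uniq -c | sort -rn
$ !! | head -n 10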
Similarly, you can repeat the last argument from the previous command line using
!$ , which is useful if you're doing a set of operations on one file, such as
checking it out via RCS, editing it, and checking it back in:
$ co -l file.txt
$ vim !$
$ ci -u !$
Or if you happen to want to work on a set of arguments, you can repeat all of the
arguments from the previous command using !* :
$ touch a.txt b.txt c.txt
$ rm !*
When you remember to use these three together, they can save you a lot of typing, and will
really increase your accuracy because you won't be at risk of mistyping any of the commands or
arguments. Naturally, however, it pays to be careful what you're running through
rm !
When you have some
spare time, something instructive to do that can help fill gaps in your Unix knowledge and to
get a better idea of the programs installed on your system and what they can do is a simple
whatis call, run
over all the executable files in your /bin and /usr/bin directories.
This will give you a one-line summary of the file's function if available from man pages.
tom@conan:/bin$ whatis *
bash (1) - GNU Bourne-Again SHell
bunzip2 (1) - a block-sorting file compressor, v1.0.4
busybox (1) - The Swiss Army Knife of Embedded Linux
bzcat (1) - decompresses files to stdout
...
tom@conan:/usr/bin$ whatis *
[ (1) - check file types and compare values
2to3 (1) - Python2 to Python3 converter
2to3-2.7 (1) - Python2 to Python3 converter
411toppm (1) - convert Sony Mavica .411 image to ppm
...
It also works on many of the files in other directories, such as /etc :
tom@conan:/etc$ whatis *
acpi (1) - Shows battery status and other ACPI information
adduser.conf (5) - configuration file for adduser(8) and addgroup(8)
adjtime (3) - correct the time to synchronize the system clock
aliases (5) - Postfix local alias database format
...
Because packages often install more than one binary and you're only in the habit of using
one or two of them, this process can tell you about programs on your system that you may have
missed, particularly standard tools that solve common problems. As an example, I first learned
about watch this
way, having used a clunky solution with for loops with sleep calls to
do the same thing many times before.
In Bash
scripting (and shell scripting in general), we often want to check the exit value of a command
to decide an action to take after it completes, likely for the purpose of error handling. For
example, to determine whether a particular regular expression regex was present
somewhere in a file options , we might apply grep(1) with its POSIX
-q option to suppress output and just use the exit value:
grep -q regex options
An approach sometimes taken is then to test the exit value with the $?
parameter, using if to check if it's non-zero, which is not very elegant and a bit
hard to read:
# Bad practice
grep -q regex options
if (($? > 0)); then
printf '%s\n' 'myscript: Pattern not found!' >&2
exit 1
fi
Because the if construct by design
tests the exit value of commands , it's better to test the command directly ,
making the expansion of $? unnecessary:
# Better
if grep -q regex options; then
# Do nothing
:
else
printf '%s\n' 'myscript: Pattern not found!' >&2
exit 1
fi
We can precede the command to be tested with ! to negate the test as
well, to prevent us having to use else as well:
# Best
if ! grep -q regex options; then
printf '%s\n' 'myscript: Pattern not found!' >&2
exit 1
fi
An alternative syntax is to use && and || to perform
if and else tests with grouped commands between braces, but these
tend to be harder to read:
# Alternative
grep -q regex options || {
printf '%s\n' 'myscript: Pattern not found!' >&2
exit 1
}
With this syntax, the two commands in the block are only executed if the
grep(1) call exits with a non-zero status. We can apply &&
instead to execute commands if it does exit with zero.
That syntax can be convenient for quickly short-circuiting failures in scripts, for example
due to nonexistent commands, particularly if the command being tested already outputs its own
error message. This therefore cuts the script off if the given command fails, likely due to
ffmpeg(1) being unavailable on the system:
hash ffmpeg || exit 1
Note that the braces for a grouped command are not needed here, as there's only one command
to be run in case of failure, the exit call.
Calls to cd are another good use case here, as running a script in the wrong
directory if a call to cd fails could have really nasty effects:
cd wherever || exit 1
In general, you'll probably only want to test $? when you have
specific non-zero error conditions to catch. For example, if we were using the
--max-delete option for rsync(1) , we could check a call's return
value to see whether rsync(1) hit the threshold for deleted file count and write a
message to a logfile appropriately:
rsync --archive --delete --max-delete=5 source destination
if (($? == 25)); then
printf '%s\n' 'Deletion limit was reached' >"$logfile"
fi
It may be tempting to use the errexit feature in the hopes of stopping a script
as soon as it encounters any error, but there are some problems with its usage that make it a bit
error-prone. It's generally more straightforward to simply write your own error handling using
the methods above.
For a really thorough breakdown of dealing with conditionals in Bash, take a look at the
relevant chapter of the Bash Guide .
"... Note that we unset the config variable after we're done, otherwise it'll be in the namespace of our shell where we don't need it. You may also wish to check for the existence of the ~/.bashrc.d directory, check there's at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference. ..."
"... Thanks to commenter oylenshpeegul for correcting the syntax of the loops. ..."
Large shell startup scripts ( .bashrc , .profile ) over about fifty
lines or so with a lot of options, aliases, custom functions, and similar tweaks can get cumbersome
to manage over time, and if you keep your dotfiles under version control it's not terribly helpful
to see large sets of commits just editing the one file when it could be more instructive if broken
up into files by section.
Given that shell configuration is just shell code, we can apply the source builtin
(or the . builtin for POSIX sh ) to load several files at the end of a
.bashrc , for example:
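For example, with hypothetical per-section filenames:
source ~/.bashrc.options
source ~/.bashrc.aliases
source ~/.bashrc.functions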
This is a better approach, but it still binds us into using those filenames; we still have to
edit the ~/.bashrc file if we want to rename them, or remove them, or add new ones.
Fortunately, UNIX-like systems have a common convention for this, the .d directory
suffix, in which sections of configuration can be stored to be read by a main configuration file
dynamically. In our case, we can create a new directory ~/.bashrc.d :
$ ls ~/.bashrc.d
options.bash
aliases.bash
functions.bash
With a slightly more advanced snippet at the end of ~/.bashrc , we can then load
every file with the suffix .bash in this directory:
# Load any supplementary scripts
for config in "$HOME"/.bashrc.d/*.bash ; do
source "$config"
done
unset -v config
Note that we unset the config variable after we're done, otherwise it'll be in the
namespace of our shell where we don't need it. You may also wish to check for the existence of the
~/.bashrc.d directory, check there's at least one matching file inside it, or check
that the file is readable before attempting to source it, depending on your preference.
The same method can be applied with .profile to load all scripts with the suffix
.sh in ~/.profile.d , if we want to write in POSIX sh , with
some slightly different syntax:
# Load any supplementary scripts
for config in "$HOME"/.profile.d/*.sh ; do
. "$config"
done
unset -v config
Another advantage of this method is that if you have your dotfiles under version control, you
can arrange to add extra snippets on a per-machine basis unversioned, without having to update your
.bashrc file.
Here's my implementation of the above method, for both .bashrc and .profile
:
If you need to search a set of log files in /var/log , some of which have been compressed
with gzip as part of the
logrotate procedure,
it can be a pain to deflate them to check them for a specific string, particularly where you want
to include the current log which isn't compressed:
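Done by hand, that might mean something like this; the log names are illustrative:
$ gzip -d log.1.gz log.2.gz log.3.gz
$ grep pattern log log.1 log.2 log.3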
It turns out to be a little more elegant to use the -c switch for gzip
to deflate the files in-place and write the content of the files to standard output, concatenating
any uncompressed files you may also want to search in with
cat :
$ gzip -dc log.*.gz | cat - log | grep pattern
This and similar operations with compressed files are common enough problems that short scripts
in /bin on GNU/Linux systems exist, providing analogues to existing tools that can work
with files in both a compressed and uncompressed state. In this case, the
zgrep tool is of the most
use to us:
$ zgrep pattern log*
Note that this search will also include the uncompressed log file and search it normally.
These z* tools operate on possibly compressed files, which makes them particularly well-suited to
searching and manipulating logs in mixed compression states. It's worth noting that most of these
are actually reasonably simple shell scripts.
The complete list of tools, most of which do the same thing as their z-less equivalents, can be
gleaned with a quick whatis call:
$ pwd
/bin
$ whatis z*
zcat (1) - compress or expand files
zcmp (1) - compare compressed files
zdiff (1) - compare compressed files
zegrep (1) - search possibly compressed files for a regular expression
zfgrep (1) - search possibly compressed files for a regular expression
zforce (1) - force a '.gz' extension on all gzip files
zgrep (1) - search possibly compressed files for a regular expression
zless (1) - file perusal filter for crt viewing of compressed text
zmore (1) - file perusal filter for crt viewing of compressed text
znew (1) - recompress .Z files to .gz files
"... Personnel turnover in Indian firms is sky high. As soon as software engineers finish taking part in a project, they jot the reference on their CV, and rush to find another project, in a different area, to extend their skill set, beef up their CV and improve their chances of a higher salary in the IT market. ..."
"... The consequence is that Indian IT firms in charge of the outsourced projects/products just cannot rely upon the implicit knowledge within the heads of their employees. In a sense, they cannot afford to have "key personnel", experienced people who know important, undocumented aspects of a piece of software and can be queried to clear up things -- all employees must be interchangeable. Hence the strict reliance on well-documented processes. ..."
"... Outsourcing your core competencies or your competitive advantage -- that's the real beauty of outsourcing! What could go wrong? ..."
I've seen a couple of BPOs, Business Process Outsourcing deals.
The key for success of BPO in the short term is to define the process -- document every step
of the process of having something done and then introduce control-functions to ensure that the
process is being followed. Possibly also develop some tools in supporting the process.
If the process is understood and documented well -- so well that rare/expensive skill is no
longer needed to follow the process -- then it is possible to look for the lowest possible cost
employee to follow the process.
As far as I can tell the most common mistake in BPO deals is that the process being outsourced
isn't understood well. The documentation tends to be extensive but if the understanding is lacking
then the process might be providing different results than wished for. Key Performance Indicators
(KPIs) are introduced, and then the gaming of the KPIs begins.
Even if the initial process was well understood and well documented, the next problem is that,
due to the distance between provider and client, there may be difficulties in adapting the process to changing
circumstances.
And yes, there are similarities between BPOs and automation. Understanding of the process is key;
without it, the end result is usually bad. The key to learning and
understanding is often humility, and humility is often (in my experience) lacking in the executives,
senior management and project managers involved in BPO deals and/or efficiency projects.
See, you put it right on "the process is not understood well". My point is, in many companies
it's questionable whether the process can even be ever understood well, unless you have significant
in-company knowledge, which makes outsourcing a key risk, even in absence of anything else.
Yup, you got it -- Business Process Outsourcing. I've seen ill-understood processes ruined
when, e.g., software development was transferred to India. I saw this starting in 2000 up to
the present day. Yankee management LOVED the idea of cheap labor, but never got back the software
it originally intended and designed.
It was the culture: Yankees are software cowboys -- able to change project as needed; Indians
loved the process of development. The Indians sounded good but never got the job done.
In the 1990s, I was quite impressed that the first company to reach a CMM level 5 was from
India (a subsidiary from IBM, if I remember correctly) -- and thereafter seeing Indian software
firms achieving ISO 9000/CMM compliance before large Western corporations.
Later, I worked in several projects that were partly outsourced/externalized to India (the
usual suspects like HCL or Wipro), and I understood. Personnel turnover in Indian firms is
sky high. As soon as software engineers finish taking part in a project, they jot the reference
on their CV, and rush to find another project, in a different area, to extend their skill set,
beef up their CV and improve their chances of a higher salary in the IT market.
Remaining in one domain area, with one set of technologies, is not considered a good thing
for advancement in the Indian IT market, or when trying to get directly hired by a Western firm.
They often have to support an extended family that paid for their computer science studies, so
fast career moves are really important for them.
The consequence is that Indian IT firms in charge of the outsourced projects/products just
cannot rely upon the implicit knowledge within the heads of their employees. In a sense, they
cannot afford to have "key personnel", experienced people who know important, undocumented aspects
of a piece of software and can be queried to clear up things -- all employees must be interchangeable.
Hence the strict reliance on well-documented processes.
To expand on that, I'd say that interchangeable employees have limited or no bargaining power,
which makes it easier to keep salaries low. What is left for the interchangeable employee
to do to increase earnings? Yep, change jobs -- leading to even more focus on making employees
interchangeable.
The game (war) between the company and its employees escalates. Power is everything and all CEOs
know that you don't get paid what you're worth -- you're paid what you negotiate. Maintaining
power is worth the cost of churn.
Pity that the "build or buy" decision calculus has been perverted beyond what the firm needs as
inputs into its final market-ready products, and is increasingly being used as a defensive move
by big companies to kill off competition from smaller firms via knock-off products or "acqui-hiring"
of talent.
Acqui-hiring: acquiring the smaller firm, pretending to integrate its product into the big
company's product line, starving the product of resources to slowly kill it off, then pulling
the plug citing "disappointing sales and take-up in the market" to protect the big company's market
share.
Then redeploying the acqui-hired "talent" (i.e. the founders of the acquired firm) to work on the
next generation of the big company's products (except now they do so in a bureaucratic, red-tape-laden
maze of "corporate innovation management" processes).
I thought one would outsource the core competitive disadvantages. That is, a smaller firm would
outsource (buy) when they could not competitively create a subassembly/subcomponent because the
sourcing firm had successfully achieved superior economies of scale (EoS) . This is why multiple
automobile manufacturers purchase their subcomponents (say, coils or sparkplugs or bearings) from
a supplier instead of manufacturing them in-house as the supplier achieves superior EoS by supplying
the entire industry.
Even commenter Larry's above example ("offload liability risk with our larger insurance policy")
is an EoS advantage/disadvantage, no?
Problems occur when one side of the dance is dominated by one or two very large players (think
WalMart or Takata) or political will (defined here as $) is involved.
When Corporate America started offshoring R&D, scientist jobs, engineering jobs, programming
jobs, medical jobs, legal jobs, etc., etc., etc., beginning in the late 1970s but exploding under
Jack Welch at GE in 1984-1985 [and I was offered a position helping in the process -- so nobody
dare contradict me], it simply compounded the damage from those offshored manufacturing jobs, for without them
in the past, too many American inventors would never have come to fruition -- this of course requires
some knowledge of the history of technology.
The one absolute in human nature and human commerce: the greater the inequality, the lower
the innovation -- IN EVERYTHING, IN EVERY AREA!
In other words, the greatest innovation in America (and everywhere else throughout history)
took place when this nation was at its lowest in inequality indices and closest to socialism:
the 1950s to 1960s and early 1970s -- and almost everything has simply been incremental since
then.
As Leonardo da Vinci once remarked:
" Realize that everything connects to everything else. "
In other words, the greatest innovation in America (and everywhere else throughout history)
took place when this nation was at its lowest in inequality indices and closest to socialism:
the 1950s to 1960s and early 1970s
I disagree with this statement and would ask you to provide specific references for such a
sweeping claim.
and almost everything has simply been incremental since then.
And would argue, with diagrams on a chalkboard if necessary, that all human knowledge is incremental.
At least, that which requires more than simple immediate sensory perception.
"... An earlier version of this post suggested changing the TERM definition in .bashrc , which is generally not a good idea, even if bounded with conditionals as my example was. You should always set the terminal string in the emulator itself if possible, if you do it at all. ..."
"... Similarly, to use 256 colours in GNU Screen, add the following to your .screenrc : ..."
Using 256 colours
in terminals is well-supported in GNU/Linux distributions these days, and also in Windows
terminal emulators like PuTTY. Using 256 colours is great for Vim colorschemes in particular,
but also very useful for Tmux colouring or any other terminal application where a slightly
wider colour space might be valuable. Be warned that once you get this going reliably, there's
no going back if you spend a lot of time in the terminal.
Xterm
To set this up for xterm or emulators that use xterm as the
default value for $TERM , such as xfce4-terminal or
gnome-terminal , it generally suffices to check the options for your terminal
emulator to ensure that it will allow 256 colours, and then use the TERM string xterm-256color for it.
An earlier version of this post suggested changing the TERM definition in
.bashrc , which is generally not a good idea, even if bounded with conditionals as
my example was. You should always set the terminal string in the emulator itself if possible,
if you do it at all.
Be aware that older systems may not have terminfo definitions for this
terminal, but you can always copy them in using a private .terminfo directory if
need be.
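As a quick check that the new TERM string is working (a small aside, not from the original post), you can ask the terminfo database how many colours the terminal advertises:
$ tput colors
256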
Tmux
To use 256 colours in Tmux, you should set the default terminal in .tmux.conf
to be screen-256color :
set -g default-terminal "screen-256color"
This will allow you to use color definitions like colour231 in your status
lines and other configurations. Again, this particular terminfo definition may not
be present on older systems, so you should copy it into
~/.terminfo/s/screen-256color on those systems if you want to use it
everywhere.
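If you do need to copy a definition around, one way to do it (the host name remote is hypothetical) is to dump the entry with infocmp and compile it on the other machine with tic, which installs into ~/.terminfo when run as a non-root user:
$ infocmp screen-256color > screen-256color.ti
$ scp screen-256color.ti remote:
$ ssh remote tic screen-256color.ti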
GNU Screen
Similarly, to use 256 colours in GNU Screen, add the following to your
.screenrc :
term screen-256color
Vim
With the applicable options from the above set, you should not need to change anything in
Vim to be able to use 256-color colorschemes. If you're wanting to write or update your own
256-colour compatible scheme, it should either begin with set t_Co=256 , or more
elegantly, check that the value of the corresponding option &t_Co is 256
before trying to use any of the extra colour set.
By default, the
Bash shell keeps the history of your most recent session in the .bash_history
file, and the commands you've issued in your current session are also available with a
history call. These defaults are useful for keeping track of what you've been up
to in the shell on any given machine, but with disks much larger and faster than they were when
Bash was designed, a little tweaking in your .bashrc file can record history more
permanently, consistently, and usefully.
Append history instead of rewriting it
You should start by setting the histappend option, which will mean that when
you close a session, your history will be appended to the .bash_history
file rather than overwriting what's in there.
shopt -s histappend
Allow a larger history file
The default maximum number of commands saved into the .bash_history file is a
rather meager 500. If you want to keep history further back than a few weeks or so, you may as
well bump this up by explicitly setting $HISTSIZE to a much larger number in your
.bashrc . We can do the same thing with the $HISTFILESIZE
variable.
HISTFILESIZE=1000000
HISTSIZE=1000000
The man page for Bash says that HISTFILESIZE can be
unset to stop truncation entirely, but unfortunately this doesn't work in
.bashrc files due to the order in which variables are set; it's therefore more
straightforward to simply set it to a very large number.
If you're on a machine with resource constraints, it might be a good idea to occasionally
archive old .bash_history files to speed up login and reduce memory
footprint.
Don't store specific lines
You can prevent commands that start with a space from going into history by setting
$HISTCONTROL to ignorespace . You can also ignore duplicate commands,
for example repeated du calls to watch a file grow, by adding
ignoredups . There's a shorthand to set both in ignoreboth .
HISTCONTROL=ignoreboth
You might also want to remove the use of certain commands from your history, whether for
privacy or readability reasons. This can be done with the $HISTIGNORE variable.
It's common to use this to exclude ls calls, job control builtins like
bg and fg , and calls to history itself:
HISTIGNORE='ls:bg:fg:history'
Record timestamps
If you set $HISTTIMEFORMAT to something useful, Bash will record the timestamp
of each command in its history. In this variable you can specify the format in which you want
this timestamp displayed when viewed with history . I find the full date and time
to be useful, because it can be sorted easily and works well with tools like cut
and awk .
HISTTIMEFORMAT='%F %T '
Use one command per line
To make your .bash_history file a little easier to parse, you can force
commands that you entered on more than one line to be adjusted to fit on only one with the
cmdhist option:
shopt -s cmdhist
Store history immediately
By default, Bash only records a session to the .bash_history file on disk when
the session terminates. This means that if you crash or your session terminates improperly, you
lose the history up to that point. You can fix this by recording each line of history as you
issue it, through the $PROMPT_COMMAND variable:
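One common way to do this (and very likely what was intended here) is to have $PROMPT_COMMAND append new history lines each time the prompt is displayed:
# Append new history lines to $HISTFILE before each prompt
PROMPT_COMMAND='history -a'
If you already use $PROMPT_COMMAND for something else, append ; history -a to its existing value rather than overwriting it.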
Setting the Bash
option histexpand allows some convenient typing shortcuts using Bash history
expansion . The option can be set with either of these:
$ set -H
$ set -o histexpand
It's likely that this option is already set for all interactive shells, as it's on by
default. The manual, man bash , describes these features as follows:
-H Enable ! style history substitution. This option is on
by default when the shell is interactive.
You may have come across this before, perhaps to your annoyance, in the following error
message that comes up whenever ! is used in a double-quoted string, or without
being escaped with a backslash:
$ echo "Hi, this is Tom!"
bash: !": event not found
If you don't want the feature and thereby make ! into a normal character, it
can be disabled with either of these:
$ set +H
$ set +o histexpand
History expansion is actually a very old feature of shells, having been available in
csh before Bash usage became common.
This article is a good followup to Better Bash history , which among
other things explains how to include dates and times in history output, as these
examples do.
Basic history expansion
Perhaps the best known and most useful of these expansions is using !! to refer
to the previous command. This allows repeating commands quickly, perhaps to monitor the
progress of a long process, such as disk space being freed while deleting a large file:
$ rm big_file &
[1] 23608
$ du -sh .
3.9G .
$ !!
du -sh .
3.3G .
It can also be useful to specify the full filesystem path to programs that aren't in your
$PATH :
$ hdparm
-bash: hdparm: command not found
$ /sbin/!!
/sbin/hdparm
In each case, note that the command itself is printed as expanded, and then run to print the
output on the following line.
History by absolute index
However, !! is actually a specific example of a more general form of history
expansion. For example, you can supply the history item number of a specific command to repeat
it, after looking it up with history :
$ history | grep expand
3951 2012-08-16 15:58:53 set -o histexpand
$ !3951
set -o histexpand
You needn't enter the !3951 on a line by itself; it can be included as any part
of the command, for example to add a prefix like sudo :
$ sudo !3850
If you include the escape string \! as part of your Bash prompt , you can include the current
command number in the prompt before the command, making repeating commands by index a lot
easier as long as they're still visible on the screen.
History by relative index
It's also possible to refer to commands relative to the current command. To
substitute the second-to-last command, we can type !-2 . For example, to check
whether truncating a file with sed worked correctly:
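The original example isn't shown; a made-up session illustrating the idea (mail.log and the line counts are hypothetical):
$ wc -l mail.log
1000 mail.log
$ sed -i '1,900d' mail.log
$ !-2
wc -l mail.log
100 mail.log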
This works further back into history, with !-3 , !-4 , and so
on.
Expanding for historical arguments
In each of the above cases, we're substituting for the whole command line. There are also
ways to get specific tokens, or words , from the command if we want that. To get the
first argument of a particular command in the history, use the !^
token:
$ touch a.txt b.txt c.txt
$ ls !^
ls a.txt
a.txt
To get the last argument, use !$ :
$ touch a.txt b.txt c.txt
$ ls !$
ls c.txt
c.txt
To get all arguments (but not the command itself), use !* :
$ touch a.txt b.txt c.txt
$ ls !*
ls a.txt b.txt c.txt
a.txt b.txt c.txt
This last one is particularly handy when performing several operations on a group of files;
we could run du and wc over them to get their size and character
count, and then perhaps decide to delete them based on the output:
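A hypothetical session along those lines (file names made up, output omitted):
$ du a.txt b.txt c.txt
$ wc !*
wc a.txt b.txt c.txt
$ rm !*
rm a.txt b.txt c.txt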
More generally, you can use the syntax !n:w to refer to any specific argument
in a history item by number. In this case, the first word, usually a command or builtin, is
word 0 :
$ history | grep bash
4073 2012-08-16 20:24:53 man bash
$ !4073:0
man
What manual page do you want?
$ !4073:1
bash
You can even select ranges of words by separating their indices with a hyphen:
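For example (continuing with the same throwaway files):
$ touch a.txt b.txt c.txt
$ ls !!:1-2
ls a.txt b.txt
a.txt  b.txt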
If you want to match any part of the command line, not just the start, you can use
!?string? :
$ !?bash?
man bash
Be careful when using these, if you use them at all. By default it will run the most recent
command matching the string immediately , with no prompting, so it might be a problem
if it doesn't match the command you expect.
Checking history expansions before running
If you're paranoid about this, Bash allows you to audit the command as expanded before you
enter it, with the histverify option:
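The option is set with shopt; a sketch of the behaviour, reusing the rm example that appears further below:
$ shopt -s histverify
$ !rm
$ rm important-file
With histverify enabled, the expanded command is placed on the next prompt line for review and editing, and only runs if you press Enter again.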
This option works for any history expansion, and may be a good choice for more cautious
administrators. It's a good thing to add to one's .bashrc if so.
If you don't need this set all the time, but you do have reservations at some point about
running a history command, you can arrange to print the command without running it by adding a
:p suffix:
$ !rm:p
rm important-file
In this instance, the command was expanded, but thankfully not actually
run.
Substituting strings in history expansions
To get really in-depth, you can also perform substitutions on arbitrary commands from the
history with !!:gs/pattern/replacement/ . This is getting pretty baroque even for
Bash, but it's possible you may find it useful at some point:
$ !!:gs/txt/mp3/
rm a.mp3 b.mp3 c.mp3
If you only want to replace the first occurrence, you can omit the g :
$ !!:s/txt/mp3/
rm a.mp3 b.txt c.txt
Stripping leading directories or trailing files
If you want to chop a filename off a long argument to work with the directory, you can do
this by adding an :h suffix, kind of like a dirname call in Perl:
$ du -sh /home/tom/work/doc.txt
$ cd !$:h
cd /home/tom/work
To do the opposite, like a basename call in Perl, use :t :
$ ls /home/tom/work/doc.txt
$ document=!$:t
document=doc.txt
Stripping extensions or base names
A bit more esoteric, but still possibly useful; to strip a file's extension, use
:r :
$ vi /home/tom/work/doc.txt
$ stripext=!$:r
stripext=/home/tom/work/doc
To do the opposite, to get only the extension, use :e :
$ vi /home/tom/work/doc.txt
$ extonly=!$:e
extonly=.txt
Quoting history
If you're performing substitution not to execute a command or fragment but to use it as a
string, it's likely you'll want to quote it. For example, if you've just found through
experiment and trial and error an ideal ffmpeg command line to accomplish some
task, you might want to save it for later use by writing it to a script:
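The original command isn't reproduced above; a hypothetical reconstruction (the ffmpeg options and the encode.sh filename are made up, but the :q modifier is real) might be:
$ ffmpeg -i input.avi -c:v libx264 "output-$(date +%F).mp4"
$ echo !!:q > encode.sh
echo 'ffmpeg -i input.avi -c:v libx264 "output-$(date +%F).mp4"' > encode.sh
The :q modifier wraps the recalled command in single quotes, so the $(date ...) substitution inside it is written to the file verbatim instead of being expanded again.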
In this case, this will prevent Bash from executing the command expansion "$(date ...
)" , instead writing it literally to the file as desired. If you build a lot of complex
commands interactively that you later write to scripts once completed, this feature is really
helpful and saves a lot of cutting and pasting.
Thanks to commenter Mihai Maruseac for pointing out a bug in the examples.
"... If you're using Bash version 4.0 or above ( bash --version ), you can save a bit of terminal
space by setting the PROMPT_DIRTRIM variable for the shell. This limits the length of the tail end of
the \w and \W expansions to that number of path elements: ..."
The common default of some variant of \h:\w\$ for a
Bash prompt PS1
string includes the \w escape character, so that the user's current working directory
appears in the prompt, but with $HOME shortened to a tilde:
This is normally very helpful, particularly if you leave your shell for a time and forget where
you are, though of course you can always call the pwd shell builtin. However it can
get annoying for very deep directory hierarchies, particularly if you're using a smaller terminal
window:
If you're using Bash version 4.0 or above ( bash --version ), you can save a
bit of terminal space by setting the PROMPT_DIRTRIM variable for the shell. This limits
the length of the tail end of the \w and \W expansions to that number of
path elements:
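For example, to keep only the last two path elements (the directory shown here is made up):
$ cd ~/projects/webapp/src/main/resources
$ PROMPT_DIRTRIM=2
With a \w-based prompt, the directory portion is then displayed as something like ~/.../main/resources instead of the full path.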
This is a good thing to include in your ~/.bashrc file if you often find yourself
deep in directory trees where the upper end of the hierarchy isn't of immediate interest to you.
You can remove the effect again by unsetting the variable:
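For example:
$ unset PROMPT_DIRTRIM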
If you pass -1 as the process ID argument to either the
kill shell command or the
kill C function , then the signal is sent to all the processes it can reach, which
in practice means all the processes of the user running the kill command or syscall.
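For instance (a small sketch, not from the original discussion; the user name alice is hypothetical):
# Send SIGTERM to every process the invoking user is permitted to signal
# (run as a normal user, this will typically end your own session too)
kill -TERM -1
# The pkill equivalent, restricted to processes whose effective user is "alice"
pkill -TERM -u alice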
pkill - ... signal processes based on name and other attributes
-u, --euid euid,...
Only match processes whose effective user ID is listed.
Either the numerical or symbolical value may be used.
-U, --uid uid,...
Only match processes whose real user ID is listed. Either the
numerical or symbolical value may be used.
-u, --user
Kill only processes the specified user owns. Command names
are optional.
I think any utility used to find processes in a Linux/Solaris-style /proc (procfs) will use
the full list of processes (doing some readdir of /proc ). I think they will
iterate over the numeric subdirectories of /proc and check every process found for a
match.
To get the list of users, use getpwent
(it returns one user per call).
The skill (procps & procps-ng)
and killall (psmisc)
tools both use the getpwnam library call
to parse the argument of the -u option, and only a username will be parsed.
pkill (procps & procps-ng)
uses both atol and getpwnam to parse the -u / -U argument, allowing
both numeric and textual user specifiers.
pkill is not obsolete. It may be unportable outside Linux, but the question was about Linux
specifically. – Lars Wirzenius
Aug 4 '11 at 10:11
Trap syntax is very simple and easy to understand: first we must call the trap builtin, followed
by the action(s) to be executed, then we must specify the signal(s) we want to react to:
trap [-lp] [[arg] sigspec]
Let's see what the possible trap options are for.
When used with the -l flag, the trap command will just display a list of signals
associated with their numbers. It's the same output you can obtain running the kill -l
command:
It's really important to point out that it's possible to react only to signals that allow the script
to respond: the SIGKILL and SIGSTOP signals cannot be caught, blocked or
ignored.
Apart from signals, traps can also react to some pseudo-signal such as EXIT, ERR
or DEBUG, but we will see them in detail later. For now just remember that a signal can be specified
either by its number or by its name, even without the SIG prefix.
About the -p option now. This option only makes sense when a command is not provided
(otherwise it will produce an error). When trap is used with it, a list of the previously set traps
will be displayed. If a signal name or number is specified, only the trap set for that specific
signal will be displayed; otherwise no distinction is made, and all the traps will be displayed:
$ trap 'echo "SIGINT caught!"' SIGINT
We set a trap to catch the SIGINT signal: it will just display the "SIGINT caught" message onscreen
when the given signal is received by the shell. If we now use trap with the -p option, it will display
the trap we just defined:
$ trap -p
trap -- 'echo "SIGINT caught!"' SIGINT
By the way, the trap is now "active", so if we send a SIGINT signal, either using the kill command,
or with the CTRL-c shortcut, the associated command in the trap will be executed (^C is just printed
because of the key combination):
^CSIGINT caught!
Trap in action
We will now write a simple script to show trap in action; here it is:
#!/usr/bin/env bash
#
# A simple script to demonstrate how trap works
#
set -e
set -u
set -o pipefail
trap 'echo "signal caught, cleaning..."; rm -i linux_tarball.tar.xz' SIGINT SIGTERM
echo "Downloading tarball..."
wget -O linux_tarball.tar.xz https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.13.5.tar.xz &> /dev/null
The above script just tries to download the latest Linux kernel tarball, using wget , into the directory
from which it is launched. During the task, if the SIGINT or SIGTERM signals are received
(notice how you can specify more than one signal on the same line), the partially downloaded file
will be deleted.
In this case there are actually two commands: the first is the echo which prints the
message onscreen, and the second is the actual rm command (we provided the -i option
to it, so it will ask for user confirmation before removing); they are separated by a semicolon.
Instead of specifying commands this way, you can also call functions: this would give you more re-usability.
Notice that if you don't provide any command the signal(s) will just be ignored!
This is the output of the script above when it receives a SIGINT signal:
A very important thing to remember is that when a script is terminated by a signal, like above, its
exit status will be the result of 128 + the signal number . As you can see, the script
above, being terminated by a SIGINT, has an exit status of 130 :
$ echo $?
130
Lastly, you can disable a trap just by calling trap followed by the - sign,
followed by the signal(s) name or number:
trap - SIGINT SIGTERM
The signals will take back the value they had upon entrance to the shell.
Pseudo-signals
As already mentioned above, trap can be set not only for signals which allow the script to respond
but also for what we can call "pseudo-signals". They are not technically signals, but correspond to
certain situations that can be specified:
EXIT When EXIT is specified in a trap, the command of the trap will be executed on exit from the shell.
ERR This will cause the argument of the trap to be executed when a command returns a non-zero
exit status, with some exceptions (the same as the shell errexit option): the command must not be part of a while or
until loop; it must not be part of an if construct, nor part of a &&
or || list, and its value must not be inverted by using the ! operator.
DEBUG This will cause the argument of the trap to be executed before every simple command,
for , case or select command, and before the first command
in shell functions.
RETURN The argument of the trap is executed after a function or a script
sourced by using source or the . command.
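As a small illustration of the EXIT pseudo-signal (a sketch, not from the article above), cleanup registered this way runs however the script finishes, whether normally, via exit, or after a trapped signal:
#!/usr/bin/env bash
# Remove the temporary file no matter how the script terminates
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT
echo "intermediate data" > "$tmpfile"
# ... rest of the script ...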
"... Backquotes ( ` ` ) are old-style form of command substitution, with some differences: in this form, backslash retains its literal meaning except when followed by $ , ` , or \ , and the first backquote not preceded by a backslash terminates the command substitution; whereas in the $( ) form, all characters between the parentheses make up the command, none are treated specially. ..."
"... Double square brackets delimit a Conditional Expression. And, I find the following to be a good reading on the subject: "(IBM) Demystify test, [, [[, ((, and if-then-else" ..."
What you've written actually almost works (it would work if all the variables were numbers), but
it's not an idiomatic way at all.
( ) parentheses indicate a
subshell . What's inside them isn't an expression like in many other languages. It's a
list of commands (just like outside parentheses). These commands are executed in a separate
subprocess, so any redirection, assignment, etc. performed inside the parentheses has no effect
outside the parentheses.
With a leading dollar sign, $( ) is a
command substitution : there is a command inside the parentheses, and the output from
the command is used as part of the command line (after extra expansions unless the substitution
is between double quotes, but that's
another story ).
{ } braces are like parentheses in that they group commands, but they only
influence parsing, not execution: the grouped commands run in the current shell, not in a
subshell. The program x=2; { x=4; }; echo $x prints 4,
whereas x=2; (x=4); echo $x prints 2. (Also braces require spaces around them
and a semicolon before closing, whereas parentheses don't. That's just a syntax quirk.)
With a leading dollar sign, ${VAR} is a
parameter expansion , expanding to the value of a variable, with possible extra transformations.
(( )) double parentheses surround an
arithmetic instruction , that is, a computation on integers, with a syntax resembling other
programming languages. This syntax is mostly used for assignments and in conditionals.
The same syntax is used in arithmetic expressions $(( )) , which expand
to the integer value of the expression.
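A quick illustration of both arithmetic forms:
$ x=3
$ (( x += 2 ))
$ echo "$(( x * 10 ))"
50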
[[ ]] double brackets surround
conditional expressions . Conditional expressions are mostly built on
operators such as -n $variable to test if a variable is non-empty and -e
$file to test if a file exists. There are also string equality operators: "$string1"
= "$string2" (beware that the right-hand side is a pattern, e.g. [[ $foo = a*
]] tests if $foo starts with a while [[ $foo = "a*"
]] tests if $foo is exactly a* ), and the familiar !
, && and || operators for negation, conjunction and disjunction as
well as parentheses for grouping.
Note that you need a space around each operator (e.g. [[ "$x" = "$y" ]]
, not [[ "$x"="$y" ]] ), and a space or a character like ;
both inside and outside the brackets (e.g. [[ -n $foo ]] , not [[-n
$foo]] ).
[ ] single brackets are an alternate form of conditional expressions with
more quirks (but older and more portable). Don't write any for now; start worrying about them
when you find scripts that contain them.
This is the idiomatic way to write your test in bash:
if [[ $varA = 1 && ($varB = "t1" || $varC = "t2") ]]; then
If you need portability to other shells, this would be the way (note the additional quoting
and the separate sets of brackets around each individual test):
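A sketch of what that portable form generally looks like, grouping plain [ ] tests with { } (do_something is a placeholder for the real branch body):
if [ "$varA" = 1 ] && { [ "$varB" = "t1" ] || [ "$varC" = "t2" ]; }; then
    do_something
fi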
+1 @WillSheppard for yr reminder of proper style. Gilles, don't you need a semicolon after yr
closing curly bracket and before "then" ? I always thought if , then
, else and fi could not be on the same line... As in:
Backquotes ( ` ` ) are old-style form of command substitution, with some differences:
in this form, backslash retains its literal meaning except when followed by $ ,
` , or \ , and the first backquote not preceded by a backslash terminates
the command substitution; whereas in the $( ) form, all characters between the parentheses
make up the command, none are treated specially.
You could emphasize that single brackets have completely different semantics inside and outside
of double brackets. (Because you start with explicitly pointing out the subshell semantics but
then only as an aside mention the grouping semantics as part of conditional expressions. Was confusing
to me for a second when I looked at your idiomatic example.) –
Peter A. Schneider
Aug 28 at 13:16
Just to be sure: The quoting in 't1' is unnecessary, right? Because as opposed to arithmetic instructions
in double parentheses, where t1 would be a variable, t1 in a conditional expression in double
brackets is just a literal string.
"... ...and if you weren't targeting a known/fixed operating system, using case rather than a regex match is very much the better practice, since the accepted answer depends on behavior POSIX doesn't define. ..."
"... Regular expression syntax, including the use of backquoting, is different for different tools. Always look it up. ..."
As an aside, if you were using bash for this, the preferred alternative would be the
=~ operator in [[ ]] , i.e. [[ Unauthenticated123 =~
^(Unauthenticated|Authenticated) ]] – Charles DuffyDec
14 '15 at 18:22
...and if you weren't targeting a known/fixed operating system, using case
rather than a regex match is very much the better practice, since the accepted answer depends
on behavior POSIX doesn't define. – Charles DuffyDec
14 '15 at 18:25
expr match Unauthenticated123 'Unauthenticated\|Authenticated'
If you want the number of characters matched.
To have the part of the string (Unauthenticated) returned use:
expr match Unauthenticated123 '\(Unauthenticated\|Authenticated\)'
From info coreutils 'expr invocation' :
`STRING : REGEX' Perform pattern matching. The arguments are converted to strings
and the second is considered to be a (basic, a la GNU `grep') regular expression,
with a `^' implicitly prepended. The first argument is then matched against this regular
expression.
If the match succeeds and REGEX uses `\(' and `\)', the `:'
expression returns the part of STRING that matched the
subexpression; otherwise, it returns the number of characters
matched.
If the match fails, the `:' operator returns the null string if
`\(' and `\)' are used in REGEX, otherwise 0.
Only the first `\( ... \)' pair is relevant to the return value;
additional pairs are meaningful only for grouping the regular
expression operators.
In the regular expression, `\+', `\?', and `\|' are operators
which respectively match one or more, zero or one, or separate
alternatives. SunOS and other `expr''s treat these as regular
characters. (POSIX allows either behavior.) *Note Regular
Expression Library: (regex)Top, for details of regular expression
syntax. Some examples are in *note Examples of expr::.
Note that both match and \| are GNU extensions (and the behaviour
for : (the match standard equivalent) when the pattern starts with
^ varies with implementations). Standardly, you'd do:
The leading space is to avoid problems with values of $string that start with
- or are expr operators, but that means it adds one to the number
of characters being matched.
The + forces $string to be taken as a string even if it happens
to be an expr operator. expr regular expressions are basic regular
expressions which don't have an alternation operator (and where | is not
special). The GNU implementation has it as \| though as an extension.
If all you want is to check whether $string starts with
Authenticated or Unauthenticated , you'd better use:
case $string in
(Authenticated* | Unauthenticated*) do-something
esac
@mikeserv, match and \| are GNU extensions anyway. This Q&A
seems to be about GNU expr anyway (where ^ is guaranteed to mean
match at the beginning of the string ). – Stéphane
ChazelasDec
14 '15 at 14:34
@StéphaneChazelas - i didn't know they were strictly GNU. i think i remember them
being explicitly officially unspecified - but i don't use expr too often
anyway and didn't know that. thank you. – mikeservDec
14 '15 at 14:49
It's not "strictly GNU" - it's present in a number of historical implementations (even System
V had it, undocumented, though it didn't have the others like substr/length/index), which is
why it's explicitly unspecified. I can't find anything about \| being an
extension. – Random832Dec
14 '15 at 16:13
Opens another terminal window at the current location.
Use Case
I often cd into a directory and decide it would be useful to open another terminal in
the same folder, maybe for an editor or something. Previously, I would open the terminal
and repeat the CD command.
I have aliased this command to open so I just type open and I get a new
terminal already in my desired folder.
The & disown part of the command stops the new terminal from being
dependent on the first, meaning that you can still use the first, and if you close the
first, the second will remain open.
Limitations
It relies on you having the $TERMINAL environment variable set. If you don't have this set
you could easily change it to something like the following:
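For instance, a minimal sketch of the same idea with a hard-coded fallback (xterm here is just an assumption; any emulator that inherits the working directory will do):
# Open a new terminal at the current directory, detached from this shell
"${TERMINAL:-xterm}" & disown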
While the original one-liner is indeed IMHO the canonical way to loop over numbers,
the brace expansion syntax of Bash 4.x has some kick-ass features such as correct padding
of the number with leading zeros.
Limitations
This is similar to seq , but portable. seq does not
exist on all systems and is no longer recommended these days. Other variations to
emulate various uses with seq :
# seq 1 2 10
for ((i=1; i<=10; i+=2)); do echo $i; done
# seq -w 5 10
for ((i=5; i<=10; ++i)); do printf '%02d\n' $i; done
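For comparison, the Bash 4.x brace expansion equivalents of those two seq invocations (relying on the step and zero-padding features mentioned above) would be:
# seq 1 2 10
for i in {1..10..2}; do echo $i; done
# seq -w 5 10
for i in {05..10}; do echo $i; done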
The -i parameter is to edit the file in-place.
Limitations
This works as posted in GNU sed . In BSD sed , the
-i flag requires a parameter to use as the suffix of a backup file. You can
set it to empty to not use a backup file:
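For example (the substitution expression is just a placeholder):
sed -i '' 's/old/new/' file.txt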
Chasing the latest fad and risking the organization's assets (systems, processes, people, reputations) for the sake of advancing your
goals is a clear-cut characteristic of a broken ecosystem.
The change madness is getting
worse with every passing year .
The demands for change being placed on corporate IT are
plain ridiculous. As a consequence we are breaking IT. In pursuit of absurd project commitments
we are eating
ourselves .
And the hysteria reaches fever pitch as people extrapolate trends into the future
linearly or, worse still, exponentially. This is such bad scientific thinking
that it shouldn't be worthy of debate, but the power of critical thought is a scarce
resource.
A broken management and governance system, a broken value system, and a broken
culture.
But even in the best and healthiest organisations, there are plenty of rogues; psychopaths
(and milder sociopaths) who are never going to care about anyone but themselves. They soar in
management (and they're drawn to the power); they look good to all measures and controls
except a robust risk management system - it is the last line of defense.
...I'm saying there is a real limit to how fast humans can change: how fast we can change our
behaviours, our attitudes, our processes, our systems. We need to accept that the technology
is changing faster than society, our IT sector, our organisations, our teams, ourselves can
change.
I'm saying there is a social and business backlash already to the pace of change. We're
standing in the ruins of an economy that embraced fast change.
I'm saying there are real risks to the pace of change, and we currently live in a culture
that thinks writing risks down means you can then ignore them, or that if you can't ignore
them you can always hedge them somehow.
We have to slow down a bit. Perhaps "Slow IT" is the wrong name but it was catchy. I'm not
saying go slooooow. We've somehow sustained a pretty impressive pace for decades. But clearly
it can't go much faster, if at all, and all these demands that it must go faster are plain
silly. It just can't. There's bits falling off, people burning out, smoking shells of
projects everywhere.
I'm not saying stop, but I am saying ease off a little, calm down, stop panicking, stop
this desperate headlong rush. You are right Simon that mindfulness is a key element: we all
need time to think. Let the world keep up.
Yes, Rob, short-termism is certainly bad news, and rushing to achieve short-term goals
without thinking about them in the larger context is a good indication of disaster ahead.
Much of the zeitgeist that drives the frenzy you describe is generated by vendors
especially those with software in their portfolio. Software has more margin than hardware or
services. As a result they have more marketing budget. With that budget they invest and spend
a lot of time and effort to figure out exactly how to generate the frenzy with a new thing
that you must have. They have to do this to keep market interest in the products. That is
actually what their job is.
The frenzy is deliberately and I would say almost scientifically engineered by very very
bright marketing people in software vendors. Savvy IT organizations are aware of that
distinction and maintain their focus on enabling their business to be successful. IT as
Utility, On Demand, SOA, Cloud, ..... Software vendors will not and should not stop doing
that - that is what keeps them in business and generates profits that enable new innovation.
The onus is on the buyer to understand that whatever the latest technology is, does not
provide the answer for how they will improve business performance. Improving business
performance is the burden that only the organization can bear.
Am I missing something, or does your last example (in Bash) actually do something completely different?
It works for "ABX", but if you instead make word="Hi All" like the other examples,
it returns ha , not hi all . It only works for the capitalized letters
and skips the already-lowercased letters. –
jangosteve
Jan 14 '12 at 21:58
tr '[:upper:]' '[:lower:]' will use the current locale to determine uppercase/lowercase
equivalents, so it'll work with locales that use letters with diacritical marks. –
Richard Hansen
Feb 3 '12 at 18:58
$ string="A FEW WORDS"
$ echo "${string,}"
a FEW WORDS
$ echo "${string,,}"
a few words
$ echo "${string,,[AEIUO]}"
a FeW WoRDS
$ string="A Few Words"
$ declare -l string
$ string=$string; echo "$string"
a few words
To uppercase
$ string="a few words"
$ echo "${string^}"
A few words
$ echo "${string^^}"
A FEW WORDS
$ echo "${string^^[aeiou]}"
A fEw wOrds
$ string="A Few Words"
$ declare -u string
$ string=$string; echo "$string"
A FEW WORDS
Toggle (undocumented, but optionally configurable at compile time)
$ string="A Few Words"
$ echo "${string~~}"
a fEW wORDS
$ string="A FEW WORDS"
$ echo "${string~}"
a FEW WORDS
$ string="a few words"
$ echo "${string~}"
A few words
Capitalize (undocumented, but optionally configurable at compile time)
$ string="a few words"
$ declare -c string
$ string=$string
$ echo "$string"
A few words
Title case:
$ string="a few words"
$ string=($string)
$ string="${string[@]^}"
$ echo "$string"
A Few Words
$ declare -c string
$ string=(a few words)
$ echo "${string[@]}"
A Few Words
$ string="a FeW WOrdS"
$ string=${string,,}
$ string=${string~}
$ echo "$string"
To turn off a declare attribute, use + . For example, declare
+c string . This affects subsequent assignments and not the current value.
The declare options change the attribute of the variable, but not the contents.
The reassignments in my examples update the contents to show the changes.
Edit:
Added "toggle first character by word" ( ${var~} ) as suggested by ghostdog74
Quite bizarre, "^^" and ",," operators don't work on non-ASCII characters but "~~" does... So
string="łódź"; echo ${string~~} will return "ŁÓDŹ", but echo ${string^^}
returns "łóDź". Even in LC_ALL=pl_PL.utf-8 . That's using bash 4.2.24. –
Hubert Kario
Jul 12 '12 at 16:48
@HubertKario: That's weird. It's the same for me in Bash 4.0.33 with the same string in
en_US.UTF-8 . It's a bug and I've reported it. –
Dennis Williamson
Jul 12 '12 at 18:20
@HubertKario: Try echo "$string" | tr '[:lower:]' '[:upper:]' . It will probably
exhibit the same failure. So the problem is at least partly not Bash's. –
Dennis Williamson
Jul 13 '12 at 0:44
@RichardHansen: tr doesn't work for me for non-ASCII characters. I do have correct
locale set and locale files generated. Have any idea what could I be doing wrong? –
Hubert Kario
Jul 12 '12 at 16:56
I strongly recommend the sed solution; I've been working in an environment that for
some reason doesn't have tr but I've yet to find a system without sed
, plus a lot of the time I want to do this I've just done something else in sed anyway
so can chain the commands together into a single (long) statement. –
Haravikk
Oct 19 '13 at 12:54
The bracket expressions should be quoted. In tr [A-Z] [a-z] A , the shell may perform
filename expansion if there are filenames consisting of a single letter or nullglob is set.
tr "[A-Z]" "[a-z]" A will behave properly. –
Dennis
Nov 6 '13 at 19:49
@CamiloMartin it's a BusyBox system where I'm having that problem, specifically Synology NASes,
but I've encountered it on a few other systems too. I've been doing a lot of cross-platform shell
scripting lately, and with the requirement that nothing extra be installed it makes things very
tricky! However I've yet to encounter a system without sed –
Haravikk
Jun 15 '14 at 10:51
Note that tr [A-Z] [a-z] is incorrect in almost all locales. For example, in the
en-US locale, A-Z is actually the interval AaBbCcDdEeFfGgHh...XxYyZ
. – fuz
Jan 31 '16 at 14:54
@JESii both work for me upper -> lower and lower-> upper. I'm using sed 4.2.2 and Bash 4.3.42(1)
on 64bit Debian Stretch. –
nettux443
Nov 20 '15 at 14:33
Hi, @nettux443... I just tried the bash operation again and it still fails for me with the error
message "bad substitution". I'm on OSX using homebrew's bash: GNU bash, version 4.3.42(1)-release
(x86_64-apple-darwin14.5.0) –
JESii
Nov 21 '15 at 17:34
Do not use! All of the examples which generate a script are extremely brittle; if the value
of a contains a single quote, you have not only broken behavior, but a serious security
problem. – tripleee
Jan 16 '16 at 11:45
I wonder if you didn't let some bashism in this script, as it's not portable on FreeBSD sh: ${1:$...}:
Bad substitution –
Dereckson
Nov 23 '14 at 19:52
I would like to take credit for the command I wish to share, but the truth is I obtained it
for my own use from http://commandlinefu.com .
It has the advantage that if you cd to any directory within your own home folder,
it will change all files and folders to lower case recursively; please use with caution.
It is a brilliant command line fix and especially useful for those multitudes of albums you have
stored on your drive.
This didn't work for me for whatever reason, though it looks fine. I did get this to work as an
alternative though: find . -exec /bin/bash -c 'mv {} `tr [A-Z] [a-z] <<< {}`' \; –
John Rix
Jun 26 '13 at 15:58
For Bash versions earlier than 4.0, this version should be fastest (as it doesn't
fork/exec any commands):
function string.monolithic.tolower
{
    local __word=$1
    local __len=${#__word}
    local __char
    local __octal
    local __decimal
    local __result
    for (( i=0; i<__len; i++ ))
    do
        __char=${__word:$i:1}
        case "$__char" in
            [A-Z] )
                # Convert the character to its ASCII code, flip bit 0x20 to get
                # the lowercase letter, then convert back via an octal escape
                printf -v __decimal '%d' "'$__char"
                printf -v __octal '%03o' $(( $__decimal ^ 0x20 ))
                printf -v __char \\$__octal
                ;;
        esac
        __result+="$__char"
    done
    # Result is returned in REPLY rather than printed, to avoid a fork
    REPLY="$__result"
}
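A usage note (not part of the original answer): since the function returns its result in the REPLY variable rather than printing it, you capture it like this:
string.monolithic.tolower "MiXeD CaSe"
echo "$REPLY"    # prints: mixed case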
If using v4, this is
baked-in
. If not, here is a simple, widely applicable solution. Other answers (and comments) on this thread
were quite helpful in creating the code below.
# Like echo, but converts to lowercase
echolcase () {
tr '[:upper:]' '[:lower:]' <<< "${*}"
}
# Takes one arg by reference (var name) and makes it lowercase
lcase () {
eval "${1}"=\'$(echo ${!1//\'/"'\''"} | tr [:upper:] [:lower:] )\'
}
Notes:
Doing: a="Hi All" and then: lcase a will do the same thing as:
a=$( echolcase "Hi All" )
In the lcase function, using ${!1//\'/"'\''"} instead of ${!1}
allows this to work even when the string has quotes.
In spite of how old this question is, and similar to
this answer by technosaurus ,
I had a hard time finding a solution that was portable across most platforms (that I use) as
well as older versions of bash. I have also been frustrated with arrays, functions and use of
prints, echos and temporary files to retrieve trivial variables. This works very well for me so
far I thought I would share. My main testing environments are:
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
GNU bash, version 3.2.57(1)-release (sparc-sun-solaris2.10)
lcs="abcdefghijklmnopqrstuvwxyz"
ucs="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
input="Change Me To All Capitals"
for (( i=0; i<"${#input}"; i++ )) ; do :
    for (( j=0; j<"${#lcs}"; j++ )) ; do :
        if [[ "${input:$i:1}" == "${lcs:$j:1}" ]] ; then
            input="${input/${input:$i:1}/${ucs:$j:1}}"
        fi
    done
done
A simple C-style for loop
is used to iterate through the strings. For the replacement line below, if you have not seen anything like it before,
this is where
I learned it . In this case the line checks if the char ${input:$i:1} (lower case) exists
in input and, if so, replaces it with the given char ${ucs:$j:1} (upper case) and stores it back
into input.
Many answers use external programs, which is not really using Bash .
If you know you will have Bash 4 available you should really just use the ${VAR,,}
notation (it is easy and cool). For Bash before 4 (my Mac still uses Bash 3.2, for example), I
used the corrected version of @ghostdog74 's answer to create a more portable version,
one you can call as lowercase 'my STRING' to get a lowercase version. I read comments
about setting the result to a var, but that is not really portable in Bash , since
we can't return strings. Printing it is the best solution. Easy to capture with something like
var="$(lowercase $str)" .
How this works
The way this works is by getting the ASCII integer representation of each char with printf
and then adding 32 if upper-to->lower , or subtracting 32
if lower-to->upper . Then use printf again to convert the number back
to a char. From 'A' -to-> 'a' we have a difference of 32 chars.
Using printf to explain:
$ printf "%d\n" "'a"
97
$ printf "%d\n" "'A"
65
97 - 65 = 32
And this is the working version with examples.
Please note the comments in the code, as they explain a lot of stuff:
#!/bin/bash
# lowerupper.sh
# Prints the lowercase version of a char
lowercaseChar(){
case "$1" in
[A-Z])
n=$(printf "%d" "'$1")
n=$((n+32))
printf \\$(printf "%o" "$n")
;;
*)
printf "%s" "$1"
;;
esac
}
# Prints the lowercase version of a sequence of strings
lowercase() {
word="$@"
for((i=0;i<${#word};i++)); do
ch="${word:$i:1}"
lowercaseChar "$ch"
done
}
# Prints the uppercase version of a char
uppercaseChar(){
case "$1" in
[a-z])
n=$(printf "%d" "'$1")
n=$((n-32))
printf \\$(printf "%o" "$n")
;;
*)
printf "%s" "$1"
;;
esac
}
# Prints the uppercase version of a sequence of strings
uppercase() {
word="$@"
for((i=0;i<${#word};i++)); do
ch="${word:$i:1}"
uppercaseChar "$ch"
done
}
# The functions will not add a new line, so use echo or
# append it if you want a new line after printing
# Printing stuff directly
lowercase "I AM the Walrus!"$'\n'
uppercase "I AM the Walrus!"$'\n'
echo "----------"
# Printing a var
str="A StRing WITH mixed sTUFF!"
lowercase "$str"$'\n'
uppercase "$str"$'\n'
echo "----------"
# Not quoting the var should also work,
# since we use "$@" inside the functions
lowercase $str$'\n'
uppercase $str$'\n'
echo "----------"
# Assigning to a var
myLowerVar="$(lowercase $str)"
myUpperVar="$(uppercase $str)"
echo "myLowerVar: $myLowerVar"
echo "myUpperVar: $myUpperVar"
echo "----------"
# You can even do stuff like
if [[ 'option 2' = "$(lowercase 'OPTION 2')" ]]; then
echo "Fine! All the same!"
else
echo "Ops! Not the same!"
fi
exit 0
And the results after running this:
$ ./lowerupper.sh
i am the walrus!
I AM THE WALRUS!
----------
a string with mixed stuff!
A STRING WITH MIXED STUFF!
----------
a string with mixed stuff!
A STRING WITH MIXED STUFF!
----------
myLowerVar: a string with mixed stuff!
myUpperVar: A STRING WITH MIXED STUFF!
----------
Fine! All the same!
This should only work for ASCII characters though .
For me it is fine, since I know I will only pass ASCII chars to it.
I am using this for some case-insensitive CLI options, for example.
Yves here. On the one hand, as someone who is getting to be pretty long in
the tooth, I'm not sure about calling un- and under-employed older workers
"spare". But when the alternative is being thrown on the trash heap, maybe
that isn't so unflattering.
Even though this analysis is from Australia,
most if not all of its findings would almost certainly prove out in the
US. However, there is a whole 'nother set of issues here. Australia is 85%
urban, with most of the population living in or near four large cities. So
its labor mobility issues are less pronounced than here. Moreover, a lot of
the whinging in the US about worker shortages, as even readers of the Wall
Street Journal regularly point out in its comment section is:
1. Not being willing to pay enough to skilled workers, which includes
not being willing to pay them to relocate
2. Not being willing to train less skilled workers, as companies once
did as a matter of course
A few weeks back, the Benevolent Society
released a report
which found that age-related discrimination is
particularly rife in the workplace, with over a quarter (29%) of survey
respondents stating they had been turned down for a job because of their old
age, whereas 14% claimed they had been denied a promotion because of their
old age.
Today, the Regional Australia Institute (RAI) has warned that Australia
is facing a pension crisis unless employers stop their "discrimination"
against older workers. From
The ABC
:
[RAI] has warned the Federal Government's pension bill would rise from
$45 billion to $51 billion within three years, unless efforts were made
to help more mature workers gain employment, particularly in regional
communities.
Chief executive Jack Archer said continued unemployment of people
older than 55 would cut economic growth and put a greater strain on
public resources.
"We hear that there is a lot of people who would like to work, who
would love to stay in the workforce either part-time or full-time even
though they're in their late 50s, 60s and even into their 70s," he said.
"But we're not doing a very good job of giving them the training,
giving them the incentives around the pension, and working with employers
to stop the discrimination around employing older workers"
"It basically means you've got a lot of talent on the bench, a lot of
people who could be involved and contributing who are sitting around
homes and wishing they were doing something else," he said
Mr Archer said as the population aged the workforce shrank, and that
risked future economic growth.
But he said that could be reversed provided employers embraced an
older workforce
"[When] those people are earning [an income], their pension bills will
either disappear or be much lower and the government will get a benefit
from that."
For years the growth lobby and the government has told us that Australia
needs to run high levels of immigration in order to alleviate so-called
'skills shortages' and to mitigate an ageing population. This has come
despite the Department of Employment
showing
that Australia's skills shortage
"remains low by historical
standards"
and Australia's labour underutilisation rate tracking at
high levels:
Economic models are often cited as proof that a strong immigration
program is 'good' for the economy because they show that real GDP per capita
is moderately increased via immigration, based on several dubious
assumptions.
The most dubious of these assumptions is that population ageing will
necessarily result in fewer people working, which will subtract from per
capita GDP (due to the ratio of workers to dependents falling).
Leaving aside the fact that the assumed benefit to GDP per capita from
immigration is only transitory, since migrants also age (thereby requiring
an ever-bigger immigration intake to keep the population age profile from
rising), it is just as likely that age-specific workforce participation will
respond to labour demand, resulting in fewer people being unemployed. This
is exactly what has transpired in Japan where an ageing population has
driven the unemployment rate down to only 2.8% – the lowest level since the
early-1990s:
The ABS
last month
revealed
that more Australians are working past traditional retirement
age, thereby mitigating concerns that population ageing will necessarily
reduce the employment-to-population ratio:
Clearly, however, there is much further scope to boost workforce
participation among older workers.
Rather than relying on mass immigration to fill phantom 'labour
shortages' – in turn displacing both young and older workers alike – the
more sensible policy option is to moderate immigration and instead better
utilise the existing workforce as well as use automation to overcome any
loss of workers as the population ages – as has been
utilised in Japan.
It's worth once again highlighting that
economists at MIT
recently found that there is absolutely no
relationship between population ageing and economic decline. To the
contrary, population ageing seems to have been associated with improvements
in GDP per capita, thanks to increased automation:
If anything, countries experiencing more rapid aging have grown more in
recent decades. We show that since the early 1990s or 2000s, the periods
commonly viewed as the beginning of the adverse effects of aging in much
of the advanced world, there is no negative association between aging and
lower GDP per capita; on the contrary, the relationship is significantly
positive in many specifications.
The last thing that Australia should be doing is running a mass
immigration program which, as
noted many times
by the Productivity Commission cannot provide a
long-term solution to ageing, and places increasing strains on
infrastructure, housing and the natural environment.
The sustainable 'solution' to population ageing is to better utilise the
existing workforce, where significant spare capacity exists.
At what point might an impatient constituency demand greater
accountability by its elected representatives? In the business world, the
post-2000 accounting scandals like Enron resulted in legislation to make
company execs sign off on financial statements under threat of harsh
personal penalties for misrepresentation. If legislators were forced by
constituents to enact similar legislation about their own actions, the
transparency could be very enlightening and a type of risk reduction due to
acknowledgement of material factors. Imagine seeing in print the real
reasons for votes, the funding sources behind those votes and prospect of
jail time for misrepresentation about what is just their damn job. Call it
Truth-In-Legislating, similar to the prior Truth-In-Lending act.
It's a nice idea, but I don't think that very many executives have
been penalized under the Sarbanes Oxley Act. Jamie Dimon certainly wasn't
penalized for the actions of the London Whale. I guess we'll see what
happens in the near future to the executives of Wells Fargo. I suspect
that a Truth-In-Legislating law would be filled with loopholes or would
be hampered by enforcement failures, like current Congressional ethics
rules and the Sarbanes Oxley Act.
At what point might an impatient constituency demand greater
accountability by its elected representatives?
At the point when they start shooting them (as they did in Russia in
the very early 1900s), or lopping their heads off (as they once did in
France).
Personally, I'll never work for any Ameritard corporation ever again,
as real innovation is not allowed, and the vast majority are all about
financialization in some form or other!
My work life over the past thirty years became worse and worse and worse,
in direct relation to that of the majority of others, and my last jobs were
beyond commenting on.
My very last position, which was in no manner related to my
experience, education, skill set and talents -- like too many other
American workers -- ended with a most tortuous layoff: the private equity
firm which was owner in a failed "pump and dump" brought in a "toxic work
environment specialist" whose job was to advise the sleazoid senior
executives (and by that time I was probably one of only four actual
employee workers there; they had hired a whole bunch of executives,
though) on how to create a negative work environment to convince us to
leave instead of merely laying us off (it worked for two, but not for the
last lady there or myself).
The American workplace sucks big time as evidenced by their refusal to
raise wages while forever complaining about their inability to find
skilled employees -- they are all criminals today!
I lived and worked in Australia in the late '70s and early '80s. Times
were different. Back then, the government jobs came with mandatory
retirement. I believe (but could be wrong) that it was at 63, but you could
request staying until 65 (required approval). After that, one could continue
working in the private sector, if you could find a job.
The population was much less than it is now. I believe the idea was to
make room for the younger generation coming up. Back then, government
workers, as well as many private sector workers, had defined benefit pension
plans. So retiring younger typically worked out ok.
I had one friend who continued working until about 70 because she wanted
to; liked her job; and wasn't interested in retiring. However, I knew far
more people who were eager to stop at 63. But back then, it appeared to me
that they had the financial means to do so without much worry.
Things have changed since then. More of my friends are putting off
retirement bc they need the money now. Plus defined benefit pension plans
have mostly been dispensed with and replaced by, I believe (I'm not totally
clear on this), the Aussie version of a 401(k) (someone can correct me if
I'm wrong).
What the article proposes makes sense. Of course here in the USA, older
workers/job seekers face a host of discriminatory practices, especially for
the better paying jobs. Nowadays, though, US citizens in their golden years
can sell their house, buy an RV, and become itinerant workers – sometimes at
back breaking labor, such as harvesting crops or working at an Amazon gulag
– for $10 an hour. Yippee kay-o kay-aaay!
So let us also talk about cutting Medicare for all of those lazy slacker
Seniors out there. Woo hoo!
There are really two issues:
1) for those whom age discrimination in employment is hitting in their
50s or even younger, before anyone much is retiring, it needs to be
combatted
2) eventually (sometime in their 60s, and really it should be at least by
65) people ought to be allowed to retire, and with enough money to not be
in poverty. This work-full-time-until-you-drop garbage is just that (it's
not as if 70 year olds can even say work 20 hours instead; no, it's the
same 50+ hours or whatever as everyone else is doing). And most people
won't live that much longer, really they won't; U.S. average lifespans
aren't that long and are falling fast. So it really is work until you die
that is being pushed if people aren't allowed to retire sometime in their
60s. Some people have good genes and good luck and so on (they may also
have a healthy lifestyle, but sheer luck plays a large role), and will
live far beyond that, but averages are averages.
Working past 65 is one of those things where it just depends. I
know people who are happily (and don't "really" need the money)
working past 65 bc they love their jobs and they're not taking a toll
on their health. They enjoy the socialization at work; are
intellectually stimulated; and are quite happy. That's one issue.
But when people HAVE TO work past 65 – and I know quite a few in
this category – when it starts taking a toll on their health, that is
truly bad. And I can reel off several cases that I know of personally.
It's just wrong.
Whether you live much longer or not is sort of up to fate, no
matter what. But yes, if work is taking a toll on your health, then you
most likely won't live as long.
In January, economists from MIT published a paper, entitled Secular
Stagnation? The Effect of Aging on Economic Growth in the Age of
Automation, which showed that there is absolutely no relationship between
population aging and economic decline. To the contrary, population aging
seems to have been associated with improvements in GDP per capita, thanks
to increased automation:
From the cited article.
I don't know why it never occurred to me before, but there's no reason to
ditch your most knowledgeable, most skilled workers toward the eve of their
careers except if you don't want to pay labor costs. Which we know that
most firms do not, in their mission for profit for shareholders or the
flashy new building or trying to Innuhvate.
There's a myth that innovation comes from the 20 something in their
basement, but that's just not the case. Someone who has, for instance,
overseen 100 construction projects building bridges needs to be retained,
not let go. Maybe they can't lift the sledge anymore, but I'd keep them on
as long as possible.
1. Not being willing to pay enough to skilled workers, which includes not
being willing to pay them to relocate
2. Not being willing to train less skilled workers, as companies once did
as a matter of course
3. Older workers have seen all the crap and evil management has
done, and are usually in a much better position than young, less established
employees to take effective action against it.
This. Don't expect rational actors, in management or labor. If
everyone was paid the same, regardless of age or training or education or
experience etc., then the financial incentives for variant outcomes would
decrease. Except for higher health costs for older workers. For them, we
could simply ban employer-provided health insurance; then that takes that
variable out of the equation too. So yes, the ideal is a rational Marxism
or the uniformity of the hive-mind-feminism. While we would have "from
each according to their ability, to each according to their need" we will
have added it as an axiom that all have the same need. And a whip can
encourage the hoi polloi to do their very best.
Fully agree! To your list I would add a corollary to your item #3 --
older workers, having seen all the crap and evil management has done, are
more likely to inspire other employees to feel and act with them. -- This
corollary is obvious but I think it bears stating for emphasis of the
point.
I believe your whole list might be viewed as symptoms resulting from
the concept of workers as commodity -- fungible as cogs on a wheel. Young
and old alike are dehumanized.
The boss of the branch office of the firm I last worked for before I
retired constantly emphasized how each of us must remain "fungible" [he's
who introduced me to this word] if we wanted to remain employed. The firm
would win contracts using one set of workers in its bids and slowly
replace them with new workers providing the firm a higher return per hour
billed to the client. I feel very lucky I managed to remain employed -- to
within a couple of years of the age when I could apply for Medicare.
[Maybe it's because I was too cowed to make waves and avoided raises as
best I could.]
[I started my comment considering the idea of "human capital" but ran
into trouble with that concept. Shouldn't capital be assessed in terms of
its replacement costs and its capacity for generating product or other
gain? I had trouble working that calculus into the way firms treat their
employees and decided "commodity" rather than "capital" better fit how
workers were regarded and treated.]
"skills vs. demand imbalance" not labor shortage. Capital wants to tip
the scale the other way, but isn't willing to invest the money to train the
people, per a comment I made last week. Plenty of unemployed or
under-employed even in Japan, much less Oz.
Keeping the elderly, who already have the skills, in the work place
longer is a way to put off making the investments. Getting government to tax
the poor for their own training is another method. Exploiting poor nations
education systems by importing skills yet another.
Some businesses hope to develop skills that only cost motive power
(electric), minimal maintenance, and are far less capital intensive and
quicker to the market than the current primary source's 18 years. Capitalism
on a finite resource will eat itself, but even capitalism with infinite
resources will self-destruct in the end.
Importantly, the chart labeled as Figure 2 uses GDP per capita on the y-axis.
Bearing in mind that GDP is labor force times productivity (so GDP growth is
roughly labor force growth plus productivity growth), emerging economies that
are growing faster than the rich world in both population and GDP look more
anemic on a per capita basis, allowing us rich country denizens to feel better
about our good selves. :-)
But in terms of absolute GDP growth, things ain't so bright here in the
Homeland. Both population and productivity growth are slowing. Over the past
two-thirds of a century, the trend in GDP groaf is relentlessly down, even as
debt rises in an apparent attempt to maintain unsustainable living
standards. Chart (viewer discretion advised):
Van Onselen doesn't address the rich world's busted pension systems. To
the extent that they contain a Ponzi element premised on endless growth,
immigration would modestly benefit them by adding new victims -- sorry,
workers -- to support the greying masses of doddering Boomers.
Will you still need me
Will you still feed me
When I'm sixty-four?
There's been an increase in the employment of older people in the U.S.
population. To provide a snapshot, below are three tables referring
to the U.S. by age cohorts of 1) the total population, 2) employment and 3)
employment-population ratios (percent), based on Bureau of Labor Statistics
weightings for population estimates and compiled in the Merged Outgoing
Rotation Groups (MORG) dataset by the National Bureau of Economic Research
(NBER) from the monthly Current Population Survey (CPS).
The portion of the population aged 16 to 54 has declined, while the portion
over 54 has increased.
1. Percent Population in Age Cohorts: 1986 & 2016
1986 2016 AGE
18.9 15.2 16-24
53.7 49.6 25-54
12.2 16.3 55-64
9.4 11.2 65-74
5.8 7.7 75 & OVER
100.0 100.0 ALL
The portion of those employed who are aged 16 to 54 has declined, while the
portion over 54 has increased.
2 Percent Employed in Age Cohorts: 1986 & 2016
1986 2016 AGE
18.5 12.5 16-24
68.4 64.7 25-54
10.4 16.9 55-64
2.3 4.8 65-74
0.4 1.0 75 & OVER
100.0 100.0 ALL
The employment-population ratios (percents) show significant declines for
those under 25 and increases for those 55 and above.
3. Age-Specific Employment Population Ratios (Percents)
1986 2016 AGE
59.5 49.4 16-24
77.3 77.9 25-54
51.8 61.8 55-64
14.8 25.9 65-74
3.8 7.9 75 & OVER
60.7 59.7 ALL
None of the above data refute claims about age and experience inequities.
Rather these provide a base from which to explore such concerns. Because
MORG data are representative samples with population weightings, systematic
contingency analyses are challenging.
In the 30 year interval of these data there have been changes in
population and employment by education status, gender, race, citizenship
status along with industry and occupation, all items of which are found in
the publicly available MORG dataset.
I think you are missing the point. Life expectancy at birth has
increased by nearly five years since 1986. That renders simple
comparisons of labor force participation less meaningful. The implication
is that many people are not just living longer but are in better shape in
their later middle age. Look at the dramatic drop in labor force
participation from the 25-54 age cohort v. 55 to 64. How can so few
people in that age group be working given that even retiring at 65 is
something most people cannot afford? And the increase over time in the
current 55-64 age cohort is significantly due to the entry of women into
the workplace. Mine was the first generation where that became
widespread.
The increase in the over 65 cohort reflects desperation. Anyone who
can work stays working.
Even if life expectancy is increasing due to improved health, the
percentage of those in older cohorts who are working is increasing at
an even faster rate. If a ratio is 6/8 for a category and goes up to
10/12, the category has increased (8 to 12, or 50%), the subcategory
has increased (6 to 10, or 67%), and the ratio goes from 6/8 (75/100)
to 10/12 (83.3/100).
I assume you are referencing the employment-population (E/P) ratio
when noting "the dramatic drop in labor force participation from the
25-54 age cohort v. 55 to 64." However the change in the E/P ratio for
25-54 year olds was virtually unchanged (77.3/100 in 1986 to 77.9/100
in 2016) and for the 55-64 year olds the E/P ratio INCREASED
significantly, from 51.8/100 in 1986 to 61.8/100 in 2016.
You query: "How can so few people in that age group be working
given that even retiring at 65 is something most people cannot
afford?" That's a set of concerns the data I've compiled cannot
address. It would take more time to see if an empirical
answer could be constructed, something that doesn't lend itself to
making a timely, empirically based comment. The data I compiled were
put together after reading the original post.
You note: ". . . [T]he increase over time in the current 55-64
age cohort is significantly due to the entry of women into the
workplace." Again, I didn't compute the age-gender-specific E/P
ratios. I can do that if there's interest. The OVERALL female E/P
ratio (from FRED) did not significantly increase from December 1986
(51.7/100) to December 2016 (53.8/100).
You write: "The increase in the over 65 cohort reflects
desperation. Anyone who can work stays working." Again, the data I was
using provided me no basis for this interpretation. I suspect that the
MORG data can provide some support for that interpretation. However,
based on your comments about longer life expectancy, it's likely that
a higher proportion of those in the professional-middle-class or
upper-middle-class category Richard Reeves writes about (Dream
Hoarders) were able and willing to continue working. For a time in
higher education, some institutions offered incentives for older
faculty to continue working, so that they could continue to receive a
salary and, upon becoming eligible for Social Security, draw on that
benefit as well. No doubt many, many vulnerable older people, including
workers laid off in the wake of the Great Recession and otherwise
burdened, lengthened their working lives or sought employment.
Again the MORG data can get somewhat closer to your concerns and
interests, but whether this is the forum is a challenge given the
reporting-comment cycle which guides this excellent site.
I don't understand how the media promotes the "society is aging, we need
more immigrants to avoid a labor shortage" argument and the "there will be
no jobs in the near future due to automation, there will be a jobs shortage"
argument at the same time. Dean Baker has discussed this issue:
In any event, helping to keep older workers in the workforce can be a
good thing. Some people become physically inactive after retirement and
their social networks decline which can cause depression and loneliness.
Work might benefit some people who would otherwise sink into inactivity and
loneliness.
Of course, results might vary based on individual differences and those
who engaged in hard physical labor will likely have to retire earlier due to
wear and tear on their bodies.
Increase in life expectancy is greatly influenced by a decrease in
childhood mortality. People are living longer because they aren't dying
in large numbers in childhood anymore in the US. So many arguments that
start out "we're living longer, so something" confuse a reduction in
childhood mortality with how long one can expect to live to in old age,
based on the actuarial charts. Pols who want to cut SS or increase the
retirement age find this confusion very useful.
"
Life expectancy at birth is very sensitive to reductions in the
death rates of children, because each child that survives adds many years
to the amount of life in the population. Thus, the dramatic declines in
infant and child mortality in the twentieth century were accompanied by
equally stunning increases in life expectancy.
"
I've noticed ever since the 1990s that "labor shortage" is a signal for
cost-cutting measures that trigger a recession. Which then becomes the
excuse for shedding workers and really getting the recession on.
It is not just older workers who are spare. There are other forms of
discrimination that could fall by the wayside if solving the "labor
shortage" was the sincere objective.
Often productivity, sales, and profits decrease with those cost
cuttings, which justifies further cuts, which decrease productivity,
sales, and profits, which justifies...
It's a pattern I first noticed in the 1990s and looking back in the
80s too. It's like some malevolent MBAs went out and convinced the whole
of American middle and senior business management that this was the Way
to do it. It's like something out of the most hidebound, nonsensical
ideas of Maoism and Stalinism as something that could not fail but only
be failed. It is right out of the Chicago Boys' economics playbook.
Thirty-five years later and the Way still hasn't succeeded, but they're
still trying not to fail it.
Love your reflections. Yeah, it's like a religion that they can't
pay more, can't train, must cut people till they are working to their
max at ordinary times (so have no slack for crises), etc. etc., and
that it doesn't work doesn't change the faith in it AT ALL.
This is ranting, but most jobs can be done at most ages. If you want someone
to be a SEAL or do 12 hours of farm labor, then no, of course not, but just about
everything else -- so what's the problem?
All these "we have a skilled labor shortage" or "we have a labor surplus"
or "the workers are all lazy/stupid" narratives, and "it's the unions'
fault" and "the market solves everything" and the implicit "we are a true
meritocracy and the losers are waste who deserve their pain" and my favorite,
"Job creators do make jobs", being said and/or believed all at the
same time is insanity made mainstream.
Sometimes I think whoever is running things are told they have to drink
the Draught of UnWisdom before becoming the elites.
So I'm a middle aged fella – early thirties – and have to admit that in
my industry I find that most older workers are a disaster. I'm in tech and
frankly find that most older workers are a detriment simply from being out
of date. While I sympathize, in some cases experience can be a minus rather
than a plus. The willingness to try new things and stay current with modern
technologies/techniques just isn't there for the majority of tech workers
that are over the hill.
The here-document is great, but it's messing up your shell script's formatting. You want to
be able to indent for readability. Solution
Use <<- and then you can use tab characters (only!) at the beginning of lines to
indent this portion of your shell script.
$ cat myscript.sh
...
	grep $1 <<-'EOF'
		lots of data
		can go here
		it's indented with tabs
		to match the script's indenting
		but the leading tabs are
		discarded when read
	EOF
ls
...
$
Discussion
The hyphen just after the << is enough to tell bash to ignore the leading tab
characters. This is for tab characters only and not arbitrary white space. This is
especially important with the EOF or any other marker designation. If you have
spaces there, it will not recognize the EOF as your ending marker, and the "here"
data will continue through to the end of the file (swallowing the rest of your script).
Therefore, you may want to always left-justify the EOF (or other marker) just to
be safe, and let the formatting go on this one line.
The Bourne shell provides here documents to allow blocks of data to be passed to a process
through STDIN. The typical format for a here document is something similar to this:
command <<ARBITRARY_TAG
data to pass 1
data to pass 2
ARBITRARY_TAG
This will send the data between the ARBITRARY_TAG statements to the standard input of the
process. In order for this to work, you need to make sure that the data is not indented. If you
indent it for readability, you will get a syntax error similar to the following:
./test: line 12: syntax error: unexpected end of file
To allow your here documents to be indented, you can append a "-" to the end of the
redirection strings like so:
if [ "${STRING}" = "SOMETHING" ]
then
somecommand <<-EOF
this is a string1
this is a string2
this is a string3
EOF
fi
You will need to use tabs to indent the data, but that is a small price to pay for added
readability. Nice!
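Pulling the two explanations together, here is a minimal self-contained sketch (the script name, the default pattern, and the sample data are invented for illustration). The indentation of the here-document body and of the closing EOF must be real tab characters, since <<- strips tabs only:

#!/bin/bash
# indent-demo.sh -- hypothetical example of an indented here-document.
# The leading whitespace inside the if-block below must be TAB characters;
# <<- strips leading tabs (and only tabs) from the data and the closing EOF.
PATTERN="${1:-bravo}"

if [ -n "${PATTERN}" ]
then
	# Quoting 'EOF' keeps the data literal (no variable expansion inside).
	grep "${PATTERN}" <<-'EOF'
	alpha   first line of sample data
	bravo   second line of sample data
	charlie third line of sample data
	EOF
fi

Running ./indent-demo.sh bravo should print only the matching line; if the tabs are replaced with spaces, bash will not recognize the closing EOF, the here-document will swallow the rest of the script, and you get the same "unexpected end of file" error shown above.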
"... In the 1970s a programming shop was legacy American, with only a thin scattering of foreigners like myself. Twenty years later programming had been considerably foreignized , thanks to the H-1B visa program. Now, twenty years further on, I believe legacy-American programmers are an endangered species. ..."
"... So a well-paid and mentally rewarding corner of the middle-class job market has been handed over to foreigners -- for the sole reason, of course, that they are cheaper than Americans. The desire for cheap labor explains 95 percent of U.S. immigration policy. The other five percent is sentimentality. ..."
"... Now they are brazen in their crime: you have heard, I'm sure, those stories about American workers being laid off, with severance packages conditional on their helping train their cheaper foreign replacements. That's our legal ..."
"... A "merit-based" points system won't fix that. It will quickly and easily be gamed by employers to lay waste yet more middle-class occupational zones for Americans. If it was restricted to the higher levels of "merit," we would just be importing a professional overclass of foreigners, most East and South Asians, to direct the labors of less-meritorious legacy Americans. How would that ..."
"... Measured by the number of workers per year, the largest guestworker program in the entire immigration system is now student visas through the Optional Practical Training program (OPT). Last year over 154,000 aliens were approved to work on student visas. By comparison, 114,000 aliens entered the workforce on H-1B guestworker visas. ..."
"... A History of the 'Optional Practical Training' Guestworker Program , ..."
"... incredible amount ..."
"... on all sorts of subjects ..."
"... for all kinds of outlets. (This ..."
"... no longer includes ..."
"... National Review, whose editors had some kind of tantrum and ..."
"... and several other ..."
"... . He has had two books published by VDARE.com com: ..."
"... ( also available in Kindle ) and ..."
"... Has it ever occurred to anyone other than me that the cost associated with foreign workers using our schools and hospitals and pubic services for free, is more than off-set by the cheap price being paid for grocery store items like boneless chicken breast, grapes, apples, peaches, lettuce etc, which would otherwise be prohibitively expensive even for the wealthy? ..."
Item-wise, the biggest heading there is the second one, "Interior Enforcement." That's very
welcome.
Of course we need improved border security so that people don't enter our country without
permission. That comes under the first heading. An equally pressing problem, though, is the
millions of foreigners who are living and working here, and using our schools and hospitals and
public services, who should not be here.
The President's proposals on interior enforcement cover all bases: sanctuary
cities, visa overstays, law-enforcement resources, compulsory E-Verify, more
deportations, improved visa security.
This is a major, wonderful improvement in national policy, when you consider that less than
a year ago the
White House and
Justice Department were run by committed open-borders
fanatics. I thank the President and his staff for having put so much work into such a
detailed proposal for restoring American sovereignty and the rights of American workers and
taxpayers.
That said, here come the quibbles.
That third heading, "Merit-Based Immigration System," with just four items, needs work.
Setting aside improvements on visa controls under the other headings, this is really the only
part of the proposal that covers legal immigration. In my opinion, it does so imperfectly.
There's some good meat in there, mind. Three of the four items -- numbers one, three, and
four -- got a fist-pump from me:
cutting down chain migration by limiting it to spouse and dependent children; eliminating
the Diversity Visa Lottery; and limiting the number of refugees admitted, assuming this
means severely cutting back on the numbers, preferably all the way to zero.
Good stuff. Item two, however, is a problem. Quote:
Establish a new, points-based system for the awarding of Green Cards (lawful permanent
residents) based on factors that allow individuals to successfully assimilate and support
themselves financially.
That sounds OK, bringing in talented, well-educated, well-socialized people, rather than
what the late Lee Kuan Yew referred to as "fruit-pickers." Forgive
me if I have a rather jaundiced view of this merit-based approach.
For most of my adult life I made a living as a computer programmer. I spent four years
doing this in the U.S.A. through the mid-1970s. Then I came back in the late 1980s and
worked at the same trade here through the 1990s. That gave me two
clear snapshots twenty years apart, of this particular corner of skilled middle-class
employment in America.
In the 1970s a programming shop was legacy American, with only a thin scattering of
foreigners like myself. Twenty years later programming had been considerably foreignized ,
thanks to the H-1B visa program. Now, twenty years further on, I believe legacy-American
programmers are an endangered species.
So a well-paid and mentally rewarding corner of the middle-class job market has been
handed over to foreigners -- for the sole reason, of course, that they are cheaper than
Americans. The desire for cheap labor explains 95 percent of U.S. immigration policy. The other
five percent is sentimentality.
On so-called "merit-based immigration," therefore, you can count me a cynic. I have no doubt
that American firms could recruit all the computer programmers they need from among our legacy
population. They used to do so, forty years ago. Then they discovered how to game the
immigration system for cheaper labor.
A "merit-based" points system won't fix that. It will quickly and easily be gamed by
employers to lay waste yet more middle-class occupational zones for Americans. If it was
restricted to the higher levels of "merit," we would just be importing a professional overclass
of foreigners, most East and South Asians, to direct the labors of less-meritorious legacy
Americans. How would that contribute to social harmony?
With coming up to a third of a
billion people, the U.S.A. has all the talent, all the merit , it needs. You might
make a case for a handful of certified geniuses like Einstein or worthy dissidents like
Solzhenitsyn, but those cases aside, there is no reason at all to have guest-worker programs.
They should all be shut down.
Some of these cheap-labor rackets don't even need congressional action to shut them down; it
can be done by regulatory change via executive order. The scandalous OPT-visa scam, for
example, which brings in cheap workers under the guise of student visas.
Here is John Miano writing about the OPT program last month, quote:
Measured by the number of workers per year, the largest guestworker program in the
entire immigration system is now student visas through the Optional Practical Training
program (OPT). Last year over 154,000 aliens were approved to work on student visas. By
comparison, 114,000 aliens entered the workforce on H-1B guestworker visas.
Because there is no reporting on how long guestworkers stay in the country, we do not know
the total number of workers in each category. Nonetheless, the number of approvals for work
on student visas has grown by 62 percent over the past four years so their numbers will soon
dwarf those on H-1B visas.
End quote. (And a cheery wave of acknowledgement to John Miano here from one of the
other seventeen people in the U.S.A. that knows the correct placement of the hyphen in
"H-1B.")
Our legal immigration system is addled with these scams. Don't even get me started
on the EB-5 investor's visa. It all needs sweeping away.
So for preference I would rewrite that third heading to include, yes, items one, three, and
four -- cutting down chain migration, ending the Diversity Visa Lottery, and ending refugee
settlement for anyone of less stature than Solzhenitsyn; but then, I'd replace item two with
the following:
End all guest-worker programs, with exceptions only for the highest levels of
talent and accomplishment, limit one hundred visas per annum.
So much for my amendments to the President's October 8th proposals. There is, though, one
glaring omission from that 70-item list. The proposal has no mention at all of birthright
citizenship.
Yes, yes, I know: some constitutional authorities argue that birthright citizenship is
implied in the
Fourteenth Amendment , although it is certain that the framers of that Amendment did not
have foreign tourists or illegal entrants in mind. Other scholars think Congress could
legislate against it.
The only way to find out is to have Congress legislate. If the courts strike down the
legislation as unconstitutional, let's then frame a constitutional amendment and put it to the
people.
Getting rid of birthright citizenship might end up a long and difficult process. We might
ultimately fail. The only way to find out is to get the process started . Failure to
mention this in the President's proposal is a very glaring omission.
I agree with ending birthright citizenship. But Trump should wait until he can put at
least one more strict constitutionalist in the supreme court. There will be a court
challenge, and we need judges who can understand that if the 14th Amendment didn't give
automatic citizenship to American Indians it doesn't give automatic citizenship to children
of Mexican citizens who jumped our border.
John's article, it seems to me, ignores the elephant in the room: the DACA colonists.
Trump is offering this proposal, more or less, in return for some sort of semi-permanent
regularization of their status. Bad trade, in my opinion. Ending DACA and sending those
illegals back where they belong will have more real effect on illegal and legal
immigration/colonization than all sorts of proposals to be implemented in the future, which
can and will be changed by subsequent Administrations and Congresses.
Trump would also be able to drive a much harder bargain with Congress (like maybe a
moratorium on any immigration) if he had kept his campaign promise, ended DACA the afternoon
of January 20, 2017, and busloads of DACA colonists were being sent south of the Rio
Grande.
The best hope for immigration patriots is that the Democrats are so wedded to Open Borders
that the entire proposal dies and Trump, in disgust, reenacts Ike's Operation Wetback.
Well, in the real world, things just don't work that way. It's pay me now or pay me
later. Once all the undocumented workers who are doing all the dirty, nasty jobs Americans
refuse to do are run out the country, then what?
Right, prior to 1965, Americans didn't exist. They had all starved to death because, as
everyone knows, no Americans will work to produce food and, even if they did, once Tyson
chicken plants stop making 50 percent on capital they just shut down.
If there were no Somalis in Minnesota, even Warren Buffett couldn't afford grapes.
Illegal immigrants picking American produce is a false economy.
Illegal immigrants are subsidized by the taxpayer in terms of public health, education,
housing, and welfare.
If businesses didn't have access to cheap and subsidized illegal alien labor, they would
be compelled to resort to more farm automation to reduce cost.
Cheap illegal alien labor delays the inevitable use of newer farm automation
technologies.
Many Americans would likely prefer a machine touch their food rather than an illegal alien
with strange hygiene practices.
In addition, anti-American Democrats and neocons prefer certain kinds of illegal aliens
because they bolster their diversity scheme.
@Realist "Once all the undocumented workers who are doing all the dirty, nasty jobs
Americans refuse to do are run out the country, then what?"
Eliminate welfare...then you'll have plenty of workers. Unfortunately, that train left the
station long ago. With or without welfare, there's simply no way soft, spoiled, lazy,
over-indulged Americans who have never hit a lick at anything in their life will ever perform
manual labor for anyone, including themselves.
@Randal Probably people other than you have worked out that once their wages are not
being continually undercut by cheap and easy immigrant competition, the American working
classes will actually be able to earn enough to pay the increased prices for grocery store
items, especially as the Americans who, along with machines, will replace those immigrants
doing the "jobs Americans won't do" will also be earning more and actually paying taxes on
it.
The "jobs Americans/Brits/etc won't do" myth is a deliberate distortion of reality that
ignores the laws of supply and demand. There are no jobs Americans etc won't do, only jobs
for which the employers are not prepared to pay wages high enough to make them worthwhile for
Americans etc to do.
Now of course it is more complicated than that. There are jobs that would not be
economically viable if the required wages were to be paid, and there are marginal
contributions to job creation by immigrant populations, but those aspects are in reality far
less significant than the bosses seeking cheap labour want people to think they are.
As a broad summary, a situation in which labour is tight, jobs are easy to come by and
staff hard to hold on to is infinitely better for the ordinary working people of any nation
than one in which there is a huge pool of excess labour, and therefore wages are low and
employees disposable.
You'd think anyone purporting to be on the "left", in the sense of supporting working
class people would understand that basic reality, but far too many on the left have been
indoctrinated in radical leftist anti-racist and internationalist dogmas that make them
functional stooges for big business and its mass immigration program.
Probably people other than you have worked out that once their wages are not being
continually undercut by cheap and easy immigrant competition, the American working classes
will actually be able to earn enough to pay the increased prices for grocery store items,
especially as the Americans who, along with machines, will replace those immigrants doing
the "jobs Americans won't do" will also be earning more and actually paying taxes on
it.
There might be some truth in this. When I was a student in England in the 60′s I
spent every summer working on farms, picking hops, apples, pears, potatoes and made some
money and had a lot of fun too and became an expert farm tractor operator.
No reason why US students and high school seniors should not pick up a lot of the slack.
Young people like camping in the countryside and sleeping rough, plus lots of
opportunity to meet others, have sex, smoke weed, drink beer, or whatever. If you get a free
vacation plus a nice check at the end, that makes the relatively low wages worthwhile. It is
not always a question of how much you are paid, but how much you can save.
We can fix the EB-5 visa scam. My suggestion: charge would-be "investors" $1 million to
enter the US. This $1 million is not refundable under any circumstance. It is paid when the
"investor's" visa is approved. If the "investor" is convicted of a felony, he is deported. He
may bring no one with him. No wife, no child, no aunt, no uncle. Unless he pays $1 million
for that person.
We will get a few thousand Russian oligarchs and Saudi princes a year under this
program
As to fixing the H-1B visa program, we charge employer users of the program say $25,000
per year per employee. We require the employers to inform all employees that if any is asked
to train a replacement, he should inform the DOJ immediately. The DOJ investigates and if
true, charges managerial employees who asked that a replacement be trained with fraud.
As to birthright citizenship: I say make it a five-year felony to have a child while in
the US illegally. Make it a condition of getting a tourist visa that one not be pregnant. If
the tourist visa lasts say 60 days and the woman has a child while in the US, she gets
charged with fraud.
None of these suggestions requires a constitutional amendment.
In the United States middle class prosperity reached its apogee in 1965 – before the
disastrous (and eminently foreseeable) wage-lowering consequence of the Hart-Celler Open
Immigration Act's massive admission of foreigners increased the supply of labor which began
to lower middle class prosperity and to shrink and eradicate the middle class.
It was in 1965 that ordinary Americans, enjoying maximum employment because employers were
forced to compete for Americans' talents and labor, wielded their peak purchasing
power . Since 1970 wages have remained stagnant, and since 1965 the purchasing power of
ordinary Americans has gone into steep decline.
It is long past time to halt Perpetual Mass Immigration into the United States, to end
birthright citizenship, and to deport all illegal aliens – if, that is, our leaders
genuinely care about and represent us ordinary Americans instead of continuing their
legislative, policy, and judicial enrichment of the 1-percenter campaign donor/rentier class
of transnational Globali$t Open Border$ E$tabli$hment $ellout$.
Re the birthright citizenship argument, that is not settled law in that SCOTUS has never
ruled on the question of whether a child born in the US is thereby a citizen if the parents
are illegally present. Way back in 1898, SCOTUS did resolve the issue of whether a child born
to alien parents who were legally present was thereby a citizen. That case is U.S. vs Wong
Kim Ark 169 US 649. SCOTUS ruled in favor of citizenship. If that was a justiciable issue how
much more so is it when the parents are illegally present?
My thinking is that the result would be the same but, at least, the question would be
settled. I cannot see justices returning a toddler to Beijing or worse. They would never have
invitations to cocktail parties again for the shame heaped upon them for such uncaring
conduct. Today, the title of citizen is conferred simply by bureaucratic rule, not by
judicial order.
Arguments Against Fourteenth Amendment Anchor Baby Interpretation
J. Paige Straley
Part One. Anchor Baby Argument, Mexican Case.
The ruling part of the US Constitution is Amendment Fourteen: "All persons born or
naturalized in the United States, and subject to the jurisdiction thereof, are citizens of
the United States and of the State wherein they reside."
Here is the ruling part of the Mexican Constitution, Section II, Article Thirty:
Article 30
Mexican nationality is acquired by birth or by naturalization:
A. Mexicans by birth are:
I. Those born in the territory of the Republic, regardless of the nationality of
their parents:
II. Those born in a foreign country of Mexican parents; of a Mexican father and
a foreign mother; or of a Mexican mother and an unknown father;
III. Those born on Mexican vessels or airships, either war or merchant vessels. "
A baby born to Mexican nationals within the United States is automatically a Mexican
citizen. Under the anchor baby reasoning, this baby acquires US citizenship at the same time
and so is a dual citizen. Mexican citizenship is primary because it stems from a primary
source, the parents' citizenship and the law of Mexico. The Mexican Constitution states the
child of Mexican parents is automatically a Mexican citizen at birth no matter where the
birth occurs. Since the child would be a Mexican citizen in any country, and becomes an
American citizen only if born in America, it is clear that Mexico has the primary claim of
citizenry on the child. This alone should be enough to satisfy the Fourteenth Amendment
jurisdiction thereof argument. Since Mexican citizenship is primary, it has primary
jurisdiction; thus by the plain words of the Fourteenth such child is not an American citizen
at birth.
There is a second argument for primary Mexican citizenship in the case of anchor babies.
Citizenship, whether Mexican or American, establishes rights and duties. Citizenship is a
reciprocal relationship, thus establishing jurisdiction. This case for primary Mexican
citizenship is supported by the fact that Mexico allows and encourages Mexicans resident in
the US, either illegal aliens or legal residents, to vote in Mexican elections. They are
counted as Mexican citizens abroad, even if dual citizens, and their government provides
widespread consular services as well as voting access to Mexicans residing in the US. As far
as Mexico is concerned, these persons are not Mexican in name only, but have a civil
relationship strong enough to allow a political voice; in essence, full citizenship. Clearly,
all this is the expression of typical reciprocal civic relationships expressed in legal
citizenship, further supporting the establishment of jurisdiction.
Part Two: Wong Kim Ark (1898) case. (Birthright Citizenship)
The Wong Kim Ark (WKA) case is often cited as the essential legal reasoning and precedent
for application of the fourteenth amendment as applied to aliens. There has been plenty of
commentary on WKA, but the truly narrow application of the case is emphasized reviewing a
concise statement of the question the case was meant to decide, written by Hon. Horace Gray,
Justice for the majority in this decision.
"[W]hether a child born in the United States, of parents of Chinese descent, who, at the
time of his birth, are subjects of the Emperor of China, but have a permanent domicile and
residence in the United States, and are there carrying on business, and are not employed in
any diplomatic or official capacity under the Emperor of China, becomes at the time of his
birth a citizen of the United States by virtue of the first clause of the Fourteenth
Amendment of the Constitution." (Italics added.)
For WKA to justify birthright citizenship, the parents must have "permanent domicile and
residence." But how can an illegal alien have permanent residence when the threat of
deportation is constantly present? There is no statute of limitations for illegal presence in
the US, and the passage of time does not eliminate the legal remedy of deportation. This alone
would seem to invalidate WKA as a support and precedent for illegal alien birthright
citizenship.
If illegal (or legal) alien parents are unemployed, unemployable, illegally employed, or
if they get their living by illegal means, then they are not ". . . carrying on business . .
.", and so the children of indigent or criminal aliens may not be eligible for birthright
citizenship.
If legal aliens meet the two tests provided in WKA, birthright citizenship applies.
Clearly the WKA case addresses the specific situation of the children of legal aliens, and so
is not an applicable precedent to justify birthright citizenship for the children of illegal
aliens.
Part three. Birth Tourism
Occasionally foreign couples take a trip to the US during the last phase of the wife's
pregnancy so she can give birth in the US, thus conferring birthright citizenship on the
child. This practice is called "birth tourism." WKA provides two tests for birthright
citizenship: permanent domicile and residence and doing business, and a temporary visit
answers neither condition. WKA is therefore disqualified as justification for a "birth
tourism" child to be granted birthright citizenship.
@Carroll Price "Unfortunately, that train left the station long ago. With or without
welfare, there's simply no way soft, spoiled, lazy, over-indulged Americans who have never
hit a lick at anything in their life will ever perform manual labor for anyone, including
themselves."
Then let them starve to death. The Pilgrims nipped that dumb-ass idea (welfare)
in the bud.
An equally pressing problem, though, is the millions of foreigners who are living and
working here, and using our schools and hospitals and public services, who should not be
here.
Has it ever occurred to anyone other than me that the cost associated with
foreign workers using our schools and hospitals and pubic services for free, is more than
off-set by the cheap price being paid for grocery store items like boneless chicken breast,
grapes, apples, peaches, lettuce etc, which would otherwise be prohibitively expensive even
for the wealthy?
Let alone relatively poor people (like myself) and those on fixed incomes? What
un-thinking Americans want is having their cake and eating it too. Well, in the real world,
things just don't work that way. It's pay me now or pay me later. Once all the undocumented
workers who are doing all the dirty, nasty jobs Americans refuse to do are run out of the
country, then what?
Please look up: History; United States; pre-mid-twentieth century. I'm
pretty sure Americans were eating chicken, grapes, apples, peaches, lettuce, etc. prior to
that period. I don't think their diet consisted of venison and tree bark.
But since I wasn't there, maybe I'm wrong and that is actually what they were eating.
I know some people born in the 1920′s; I'll check with them and let you know what they
say.
To enable automatic user logout, we will be using the TMOUT shell variable,
which terminates a user's login shell in case there is no activity for a given number of
seconds that you can specify.
To enable this globally (system-wide for all users), set the above variable in the
/etc/profile shell initialization file.
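As a minimal sketch of what that looks like (the 300-second value and the readonly/export lines are my additions, not part of the quoted tip), the entry in /etc/profile could be:

# Appended to /etc/profile: log out idle interactive shells after 5 minutes
TMOUT=300          # seconds of prompt inactivity before the login shell exits
readonly TMOUT     # stop users from overriding or unsetting the timeout
export TMOUT       # propagate it to shells started from this profile

For a single account the same lines can go into ~/.bash_profile instead; note that the countdown only runs while the shell is idle at a prompt (or in the read builtin), not while a foreground program is running.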
Looks like, technologically, this is a questionable approach, although the technical details are
unclear. Rsync-style backups are better done by other tools, and BTRFS is a niche filesystem.
TimeShift is a system restore tool for Linux. It provides functionality that is quite similar
to the System Restore feature in Windows or the Time Machine tool in MacOS. TimeShift protects
your system by making incremental snapshots of the file system manually or at regular automated
intervals.
These snapshots can then be restored at a later point to undo all changes to the system and
restore it to the previous state. Snapshots are made using rsync and hard-links, and the tool
shares common files amongst snapshots in order to save disk space. Now that we have an idea about
what Timeshift is, let us take a detailed look at setting up and using this tool.
... ... ...
Timeshift supports 2 snapshot formats. The first is by using Rsync, and the
second is by using the in-built features of the BTRFS file system that allow snapshots to be
created. So you can select the BTRFS format if you are using that particular filesystem. Otherwise,
you have to choose the Rsync format.
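For reference, the same operations are available from the command line as well as the GUI; the sketch below assumes the usual timeshift CLI flags (--create, --comments, --rsync, --list, --restore, --btrfs), so check timeshift --help on your distribution before relying on them:

# Take an on-demand snapshot using the rsync back end (needs root)
sudo timeshift --create --comments "before kernel upgrade" --rsync

# Show the snapshots that exist so far
sudo timeshift --list

# Roll the system back; timeshift prompts for the snapshot and target device
sudo timeshift --restore

On a BTRFS root filesystem you could pass --btrfs instead of --rsync to use native subvolume snapshots, which is the second format described above.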
"... That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as the high school teachers who teach their kids. And these are the top coders in the country! ..."
"... I don't see why more Americans would want to be coders. These companies want to drive down wages for workers here and then also ship jobs offshore... ..."
"... Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model. ..."
"... There are quite a few highly qualified American software engineers who lose their jobs to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient of the libertarian virus that has engulfed and contaminating the Valley, going hand to hand with assembling products in China by slave labor ..."
"... If you want a high tech executive to suffer a stroke, mention the words "labor unions". ..."
"... India isn't being hired for the quality, they're being hired for cheap labor. ..."
"... Enough people have had their hands burnt by now with shit companies like TCS (Tata) that they are starting to look closer to home again... ..."
"... Globalisation is the reason, and trying to force wages up in one country simply moves the jobs elsewhere. The only way I can think of to limit this happening is to keep the company and coders working at the cutting edge of technology. ..."
"... I'd be much more impressed if I saw that the hordes of young male engineers here in SF expressing a semblance of basic common sense, basic self awareness and basic life skills. I'd say 91.3% are oblivious, idiotic children. ..."
"... Not maybe. Too late. American corporations objective is to low ball wages here in US. In India they spoon feed these pupils with affordable cutting edge IT training for next to nothing ruppees. These pupils then exaggerate their CVs and ship them out en mass to the western world to dominate the IT industry. I've seen it with my own eyes in action. Those in charge will anything/everything to maintain their grip on power. No brag. Just fact. ..."
That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as
the high school teachers who teach their kids. And these are the top coders in the country!
Automated coding just pushes the level of coding further up the development food chain rather
than getting rid of it. It is the wrong approach for current tech. AI that is smart enough to model
new problems and create its own descriptive and runnable language is hopefully coming after my
lifetime, but coming sometime.
What coding does not teach is how to improve our non-code infrastructure and how to keep it running
(that's the stuff which actually moves things). Code can optimize stuff, but it needs actual actuators
to affect reality.
Sometimes these actuators are actual people walking on top of a roof while fixing it.
Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper
labor near the top of their goals and as a business model.
There are quite a few highly qualified American software engineers who lose their jobs
to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient
of the libertarian virus that has engulfed and contaminating the Valley, going hand to hand with
assembling products in China by slave labor .
If you want a high tech executive to suffer a stroke, mention the words "labor unions".
Nope. Married to a highly-technical skillset, you can still make big bucks. I say this as someone
involved in this kind of thing academically and our Masters grads have to beat the banks and fintech
companies away with dog shits on sticks. You're right that you can teach anyone to potter around
and throw up a webpage but at the prohibitively difficult maths-y end of the scale, someone suitably
qualified will never want for a job.
In a similar vein, if you accept the argument that it does drive down wages, wouldn't the culprit
actually be the multitudes of online and offline courses and tutorials available to an existing
workforce?
Funny you should pick medicine, law, engineering... 3 fields that are *not* taught in high school.
The writer is simply adding "coding" to your list. So it seems you agree with his "garbage" argument
after all.
Key word is "good". Teaching everyone is just going to increase the pool of programmers code I
need to fix. India isn't being hired for the quality, they're being hired for cheap labor.
As for women sure I wouldn't mind more women around but why does no one say their needs to be
more equality in garbage collection or plumbing? (And yes plumbers are a high paid professional).
In the end I don't care what the person is, I just want to hire and work with the best and
not someone I have to correct their work because they were hired by quota. If women only graduate
at 15% why should IT contain more than that? And let's be a bit honest with the facts, of those
15% how many spend their high school years staying up all night hacking? Very few. Now the few
that did are some of the better developers I work with but that pool isn't going to increase by
forcing every child to program... just like sports aren't better by making everyone take gym class.
I ran a development team for 10 years and I never had any trouble hiring programmers - we just
had to pay them enough. Every job would have at least 10 good applicants.
Two years ago I decided to scale back a bit and go into programming (I can code real-time low
latency financial apps in 4 languages) and I had four interviews in six months with stupidly low
salaries. I'm lucky in that I can bounce between tech and the business side so I got a decent
job out of tech.
My entirely anecdotal conclusion is that there is no shortage of good programmers just a shortage
of companies willing to pay them.
I've worn many hats so far. I started out as a sysadmin, then I moved on to web
development, then back end, and now I'm doing test automation because I am on almost the same money
for half the effort.
But the concepts won't. Good programming requires the ability to break down a task, organise
the steps in performing it, identify parts of the process that are common or repetitive so they
can be bundled together, handed-off or delegated, etc.
These concepts can be applied to any programming language, and indeed to many non-software
activities.
Well, to his point, sort of... either everything will go PHP or all those entry-level PHP developers
will be on the street. A good Java or C developer is hard to come by. And to the others: being
a developer, especially a good one, is nothing like reading and writing. The industry
is already saturated with poor coders just doing it for a paycheck.
Pretty much the entire history of the software industry since FORAST was developed for the
ORDVAC has been about desperately trying to make software development in some way possible without
driving everyone bonkers.
The gulf between FORAST and today's IDE-written, type-inferring high level languages, compilers,
abstracted run-time environments, hypervisors, multi-computer architectures and general tech-world
flavour-of-2017-ness is truly immense[1].
And yet software is still fucking hard to write. There's no sign it's getting easier despite
all that work.
Automated coding was promised as the solution in the 1980s as well. In fact, somewhere in my
archives, I've got paper journals which include adverts for automated systems that would make programmers
completely redundant by writing all your database code for you. These days, we'd think of those
tools as automated ORM generators, and they don't fix the problem; they just make a new one --
ORM impedance mismatch -- which needs more engineering on top to fix...
The tools don't change the need for the humans, they just change what's possible for the humans
to do.
[1] FORAST executed in about 20,000 bytes of memory without even an OS. The compile artifacts
for the map-reduce system I built today are an astonishing hundred million bytes... and don't
include the necessary mapreduce environment, management interface, node operating system and distributed
filesystem...
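To make the ORM point above concrete, here is a minimal sketch using JPA-style annotations (it assumes the javax.persistence API is on the classpath; the entity names and mappings are purely illustrative, not taken from the comment). The generator spares us the SQL, but the mismatch between an in-memory object graph and flat relational rows still has to be engineered around by hand:

    import javax.persistence.*;
    import java.util.List;

    // Illustrative only: a JPA-style entity of the kind an ORM generator produces.
    @Entity
    public class PurchaseOrder {
        @Id
        @GeneratedValue
        private Long id;

        // One object holds a collection; the database holds flat rows in two tables.
        // Fetch strategy, ownership and cascades are the "impedance mismatch" decisions.
        @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY)
        private List<LineItem> items;

        // getters/setters omitted
    }

    @Entity
    class LineItem {
        @Id
        @GeneratedValue
        private Long id;

        private String sku;
        private int quantity;
    }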
"There are already top quality coders in China and India"
AHAHAHAHAHAHAHAHAHAHAHA *rolls on the floor laughing* Yes........ 1%... and 99% of incredibly
bad, incompetent, untalented ones that cost 50% of a good developer but produce only 5%
in comparison. And I'm talking with a LOT of practical experience through more than a dozen corporations
all over the world which have been outsourcing to India... all have been disasters for the companies
(but good for the execs who pocketed big bonuses and left the company before the disaster blew
up in their faces).
Tech executives have pursued [the goal of suppressing workers' compensation] in a variety
of ways. One is collusion – companies conspiring to prevent their employees from earning more
by switching jobs. The prevalence of this practice in Silicon Valley triggered a justice department
antitrust complaint in 2010, along with a class action suit that culminated in a $415m settlement.
Folks interested in the story of the Techtopus (less drily presented than in the links in this
article) should check out Mark Ames' reporting, especially
this overview article and
this focus on the egregious Steve Jobs (whose canonization by the US corporate-funded media
is just one more indictment of their moral bankruptcy).
Another, more sophisticated method is importing large numbers of skilled guest workers from
other countries through the H1-B visa program. These workers earn less than their American
counterparts, and possess little bargaining power because they must remain employed to keep
their status.
I have watched as schools run by trade unions have done the opposite for the past five decades. By limiting
the number of graduates, they were able to help maintain living wages and benefits. This has been
stopped in my area due to the pressure of owner-run "trade associations".
During that same time period I have witnessed trade associations controlled by company owners,
while publicising their support of the average employee, invest enormous amounts of membership
fees in creating alliances with public institutions. Their goal has been that of flooding the
labor market and thus keeping wages low. A double hit for the average worker because membership
fees were paid by employees as well as those in control.
Coding jobs are just as susceptible to being moved to lower-cost areas of the world as hardware
jobs already have been. It's already happening. There are already top quality coders in China and India.
There is a much larger pool to choose from, and they are just as good as their western counterparts
and work harder for much less money.
Globalisation is the reason, and trying to force wages up in one country simply moves the
jobs elsewhere. The only way I can think of to limit this happening is to keep the company and
coders working at the cutting edge of technology.
I'd be much more impressed if I saw the hordes of young male engineers here in SF
expressing a semblance of basic common sense, basic self-awareness and basic life skills. I'd
say 91.3% are oblivious, idiotic children.
They would definitely not survive the zombie apocalypse.
P.S. not every kid wants or needs to have their soul sucked out of them sitting in front of
a screen full of code for some idiotic service that some other douchbro thinks is the next iteration
of sliced bread.
The demonization of Silicon Valley is clearly the next place to put all blame. Look what
"they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get
a rope!
I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San
Jose transform into a concrete jungle. There used to be quite a bit of semiconductor
equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings
have the same name : AVAILABLE. Most equipment and device manufacturing has moved to
Asia.
Programming started with binary, then machine code (hexadecimal or octal) and moved to
assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC,
PL-1, COBOL, PASCAL, C (and all its "+'s") followed making programming easier for the less
talented. Now the script based languages (HTML, JAVA, etc.) are even higher level and
accessible to nearly all. Programming has become a commodity and will be priced like milk,
wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a
career.
Hi: As I have said many times before, there is no shortage of people who fully understand the
problem and can see all the connections.
However, they all fall on their faces when it comes to the solution.
To cut to the chase, Concentrated Wealth needs to go, permanently.
Of course the challenge is how to best accomplish this.....
Damn engineers and their black and white world view, if they weren't so inept they would've
unionized instead of being trampled again and again in the name of capitalism.
Not maybe. Too late. American corporations' objective is to lowball wages here in the US. In
India they spoon-feed these pupils with affordable cutting-edge IT training for next to
nothing in rupees. These pupils then exaggerate their CVs and ship them out en masse to the
western world to dominate the IT industry. I've seen it with my own eyes in action. Those in
charge will do anything/everything to maintain their grip on power. No brag. Just fact.
Wrong again, that approach has been tried since the 80s and will keep failing only because
software development is still more akin to a technical craft than an engineering discipline.
The number of elements required to assemble a working non trivial system is way beyond
scriptable.
> That's some crystal ball you have there. English teachers will need to know how to
code? Same with plumbers? Same with janitors, CEOs, and anyone working in the service
industry?
You don't believe there will be robots to do plumbing and cleaning? The cleaner's job will
be to program robots to do what they need.
CEOs? Absolutely.
English teachers? Both of my kids have school laptops and everything is being done on the
computers. The teachers use software and create websites and what not. Yes, even English
teachers.
Not knowing / understanding how to code will be the same as not knowing how to use Word/
Excel. I am assuming there are people who don't, but I don't know any above the age of 6.
We've had 'automated coding scripts' for years for small tasks. However, anyone who says
they're going to obviate programmers, analysts and designers doesn't understand the software
development process.
Even if expert systems (an 80's concept, BTW) could code, we'd still have a huge need for
managers. The hard part of software isn't even the coding. It's determining the requirements
and working with clients. It will require general intelligence to do 90% of what we do right
now. The 10% we could automate right now, mostly gets in the way. I agree it will change, but
it's going to take another 20-30 years to really happen.
Wrong, software companies are already developing automated coding scripts. You'll get a bunch
of door-to-door knife salespeople once the dust settles, that's what you'll get.
The views of the user "imipak" are pretty common misconceptions. They are all wrong.
Notable quotes:
"... I was about to take offence on behalf of programmers, but then I realized that would be snobbish and insulting to carpenters too. Many people can code, but only a few can code well, and fewer still become the masters of the profession. Many people can learn carpentry, but few become joiners, and fewer still become cabinetmakers. ..."
"... Many people can write, but few become journalists, and fewer still become real authors. ..."
Coding has little or nothing to do with Silicon Valley. They may or may not have ulterior
motives, but ultimately they are nothing in the scheme of things.
I disagree with teaching coding as a discrete subject. I think it should be combined with
home economics and woodworking because 90% of these subjects consist of transferable skills
that exist in all of them. Only a tiny residual is actually topic-specific.
In the case of coding, the residual consists of drawing skills and typing skills.
Programming language skills? Irrelevant. You should choose the tools to fit the problem.
Neither of these needs a computer. You should only ever approach the computer at the very
end, after you've designed and written the program.
Is cooking so very different? Do you decide on the ingredients before or after you start?
Do you go shopping half-way through cooking an omelette?
With woodwork, do you measure first or cut first? Do you have a plan or do you randomly
assemble bits until it does something useful?
Real coding, taught correctly, is barely taught at all. You teach the transferable skills.
ONCE. You then apply those skills in each area in which they apply.
What other transferable skills apply? Top-down design, bottom-up implementation. The
correct methodology in all forms of engineering. Proper testing strategies, also common
across all forms of engineering. However, since these tests are against logic, they're a test
of reasoning. A good thing to have in the sciences and philosophy.
Technical writing is the art of explaining things to idiots. Whether you're designing a
board game, explaining what you like about a house, writing a travelogue or just seeing if
your wild ideas hold water, you need to be able to put those ideas down on paper in a way
that exposes all the inconsistencies and errors. It doesn't take much to clean it up to be
readable by humans. But once it is cleaned up, it'll remain free of errors.
So I would teach a foundation course that teaches top-down reasoning, bottom-up design,
flowcharts, critical path analysis and symbolic logic. Probably aimed at age 7. But I'd not
do so wholly in the abstract. I'd have it thoroughly mixed in with one field, probably
cooking as most kids do that and it lacks stigma at that age.
I'd then build courses on various crafts and engineering subjects on top of that, building
further hierarchies where possible. Eliminate duplication and severely reduce the fictions we
call disciplines.
I used to employ 200 computer scientists in my business and now teach children so I'm
apparently as guilty as hell. To be compared with a carpenter is, however, a true compliment, if you mean those that
create elegant, aesthetically-pleasing, functional, adaptable and long-lasting bespoke
furniture, because our crafts of problem-solving using limited resources in confined
environments to create working, life-improving artifacts both exemplify great human ingenuity
in action. Capitalism or no.
"But coding is not magic. It is a technical skill, akin to carpentry."
But some people do it much better than others. Just like journalism. This article is
complete nonsense, as I discuss in another comment. The author might want to consider a
career in carpentry.
"But coding is not magic. It is a technical skill, akin to carpentry."
I was about to take offence on behalf of programmers, but then I realized that would be
snobbish and insulting to carpenters too. Many people can code, but only a few can code well,
and fewer still become the masters of the profession. Many people can learn carpentry, but
few become joiners, and fewer still become cabinetmakers.
Many people can write, but few become journalists, and fewer still become real authors.
"... You can learn to code, but that doesn't mean you'll be good at it. There will be a few who excel but most will not. This isn't a reflection on them but rather the reality of the situation. In any given area some will do poorly, more will do fairly, and a few will excel. The same applies in any field. ..."
"... Oh no, there's loads of people who say they're coders, who have on their CV that they're coders, that have been paid to be coders. Loads of them. Amazingly, about 9 out of 10 of them, experienced coders all, spent ages doing it, not a problem to do it, definitely a coder, not a problem being "hands on"... can't actually write working code when we actually ask them to. ..."
"... I feel for your brother, and I've experienced the exact same BS "test" that you're describing. However, when I said "rudimentary coding exam", I wasn't talking about classic fiz-buz questions, Fibonacci problems, whiteboard tests, or anything of the sort. We simply ask people to write a small amount of code that will solve a simple real world problem. Something that they would be asked to do if they got hired. We let them take a long time to do it. We let them use Google to look things up if they need. You would be shocked how many "qualified applicants" can't do it. ..."
"... "...coding is not magic. It is a technical skill, akin to carpentry. " I think that is a severe underestimation of the level of expertise required to conceptualise and deliver robust and maintainable code. The complexity of integrating software is more equivalent to constructing an entire building with components of different materials. If you think teaching coding is enough to enable software design and delivery then good luck. ..."
"... Being able to write code and being able to program are two very different skills. In language terms its the difference between being able to read and write (say) English and being able to write literature; obviously you need a grasp of the language to write literature but just knowing the language is not the same as being able to assemble and marshal thought into a coherent pattern prior to setting it down. ..."
"... What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra. ..."
"... Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it. ..."
"... A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job. Secondarily, while I agree that one day our field might be replaced by automation, there's a level of creativity involved with good software engineering that makes your carpenter comparison a bit flawed. ..."
Agreed, to many people 'coding' consists of copying other people's JavaScript snippets from
StackOverflow... I tire of the many frauds in the business...
You can learn to code, but that doesn't mean you'll be good at it. There will be a few who
excel but most will not. This isn't a reflection on them but rather the reality of the
situation. In any given area some will do poorly, more will do fairly, and a few will excel.
The same applies in any field.
Oh, rubbish. I'm in the process of retiring from my job as an Android software designer so
I'm tasked with hiring a replacement for my organisation. It pays extremely well, the work is
interesting, and the company is successful and serves an important worldwide industry.
Still, finding highly-qualified people is hard and they get snatched up in mid-interview
because the demand is high. Not only that but at these pay scales, we can pretty much expect
the Guardian will do yet another article about the unconscionable gap between what rich,
privileged techies like software engineers make and everyone else.
Really, we're damned if we do and damned if we don't. If tech workers are well-paid we're
castigated for gentrifying neighbourhoods and living large, and yet anything that threatens
to lower what we're paid produces conspiracy-theory articles like this one.
I learned to cook in school. Was there a shortage of cooks? No. Did I become a professional
cook? No. But I sure as hell would not have missed the skills I learned for the world, and I
use them every day.
Oh no, there's loads of people who say they're coders, who have on their CV that they're
coders, that have been paid to be coders. Loads of them.
Amazingly, about 9 out of 10 of them, experienced coders all, spent ages doing it, not a
problem to do it, definitely a coder, not a problem being "hands on"... can't actually
write working code when we actually ask them to.
I feel for your brother, and I've experienced the exact same BS "test" that you're
describing. However, when I said "rudimentary coding exam", I wasn't talking about classic
fiz-buz questions, Fibonacci problems, whiteboard tests, or anything of the sort. We simply
ask people to write a small amount of code that will solve a simple real world problem.
Something that they would be asked to do if they got hired. We let them take a long time to
do it. We let them use Google to look things up if they need. You would be shocked how many
"qualified applicants" can't do it.
"intelligence, creativity, diligence, communication ability, or anything else that a job"
None of those are any use if, when asked to turn your intelligent, creative, diligent,
communicated idea into some software, you perform as well as most candidates do at simple
coding assessments... and write stuff that doesn't work.
At its root, the campaign for code education isn't about giving the next generation a
shot at earning the salary of a Facebook engineer. It's about ensuring those salaries no
longer exist, by creating a source of cheap labor for the tech industry.
Of course the writer does not offer the slightest shred of evidence to support the idea
that this is the actual goal of these programs. So it appears that the tinfoil-hat
conspiracy brigade on the Guardian is operating not only below the line, but above it,
too.
The fact is that few of these students will ever become software engineers (which,
incidentally, is my profession) but programming skills are essential in many professions for
writing little scripts to automate various tasks, or to just understand 21st century
technology.
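For readers wondering what such a "little script to automate various tasks" might look like, here is a hedged example, sketched in Java for concreteness (the file name and the word being counted are assumptions for illustration only):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.stream.Stream;

    // Counts how many lines of a text file contain the word "ERROR".
    // The default file name "app.log" is an assumption for the example.
    public class CountErrors {
        public static void main(String[] args) throws IOException {
            Path logFile = Path.of(args.length > 0 ? args[0] : "app.log");
            try (Stream<String> lines = Files.lines(logFile)) {
                long errorLines = lines.filter(line -> line.contains("ERROR")).count();
                System.out.println("Lines containing ERROR: " + errorLines);
            }
        }
    }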
Sadly this is another article by a partial journalist who knows nothing about the software
industry, but hopes to subvert what he has read somewhere to support a position he had
already assumed. As others have said, understanding coding has already become akin to being able to use a
pencil. It is a basic requirement of many higher-level roles.
But knowing which end of a pencil to put on the paper (the equivalent of the level of
coding taught in schools) isn't the same as being an artist. Moreover, anyone who knows the field recognises that top coders are gifted; they embody
genius. There are coding Caravaggios out there, but few have the experience to know that. No
amount of teaching will produce high-level coders from average humans; there is an intangible
something needed, as there is in music and art, to elevate the merely good to genius.
All to say, however many are taught the basics, it won't push down the value of the most
talented coders, and so won't reduce the costs of the technology industry in any meaningful
way as it is an industry, like art, that relies on the few not the many.
Not all of those children will want to become programmers, but at least the barrier to
entry - for more to at least experience it - will be lower.
Teaching music to only the children whose parents can afford music tuition means that
society misses out on a greater potential for some incredibly gifted musicians to shine
through.
Moreover, learning to code really means learning how to wrangle with the practical
application of abstract concepts, algorithms, numerical skills, logic, reasoning, etc., which
are all transferable skills, some of which are not in the scope of other classes, certainly
not in practice.
Like music, sport, literature etc., programming a computer, a website, a device, a smartphone
is an endeavour that can be truly rewarding as merely a pastime, and similarly is limited
only by one's imagination.
"...coding is not magic. It is a technical skill, akin to carpentry. " I think that is a
severe underestimation of the level of expertise required to conceptualise and deliver robust
and maintainable code. The complexity of integrating software is more equivalent to
constructing an entire building with components of different materials. If you think teaching
coding is enough to enable software design and delivery then good luck.
Yeah, but mania over coding skills inevitably pushes other skills out of the curriculum (or
deemphasizes them). Education is zero-sum in that there's only so much time and energy to
devote to it. Hence, you need more than vague appeals to "enhancement," especially given the
risks pointed out by the author.
"Talented coders will start new tech businesses and create more jobs."
That could be argued for any skill set, including those found in the humanities and social
sciences likely to pushed out by the mania over coding ability. Education is zero-sum: Time
spent on one subject is time that invariably can't be spent learning something else.
"If they can't literally fix everything let's just get rid of them, right?"
That's a strawman. His point is rooted in the recognition that we only have so much time,
energy, and money to invest in solutions. One's that feel good but may not do anything
distract us for the deeper structural issues in our economy. The probably with thinking
"education" will fix everything is that it leaves the status quo unquestioned.
Being able to write code and being able to program are two very different skills. In language
terms it's the difference between being able to read and write (say) English and being able to
write literature; obviously you need a grasp of the language to write literature, but just
knowing the language is not the same as being able to assemble and marshal thought into a
coherent pattern prior to setting it down.
To confuse things further, there are various levels of skill that all look the same to the
untutored eye. Suppose you wished to bridge a waterway. If that waterway was a narrow ditch
then you could just throw a plank across. As the distance to be spanned got larger and larger,
eventually you'd have to abandon intuition for engineering and experience. Exactly the same
issues happen with software, but they're less tangible; anyone can build a small program but a
complex system requires a lot of other knowledge (in my field, that's engineering knowledge
-- coding is almost an afterthought).
It's a good idea to teach young people to code, but I wouldn't raise their expectations of
huge salaries too much. For children, educating them in wider, more general fields and
abstract activities such as music will pay huge dividends, far more than just teaching
them whatever the fashionable language du jour is. (...which should be Logo, but it's too
subtle and abstract, it doesn't look "real world" enough!)
I don't see this as an issue. Sure, there could be ulterior motives there, but anyone who
wants to still be employed in 20 years has to know how to code. It is not that everyone will
be a coder, but their jobs will either include part-time coding or will require understanding
of software and what it can and cannot do. AI is going to be everywhere.
What a dumpster argument. I am not a programmer or even close, but a basic understanding of
coding has been important to my professional life. Coding isn't just about writing software.
Understanding how algorithms work, even simple ones, is a general skill on par with algebra.
But it isn't just about coding for Tarnoff. He seems to hold education in contempt
generally. "The far-fetched premise of neoliberal school reform is that education can mend
our disintegrating social fabric." If they can't literally fix everything let's just get rid
of them, right?
Never mind that a good education is clearly one of the most important things
you can do for a person to improve their quality of life wherever they live in the world.
It's "neoliberal," so we better hate it.
I'm not going to argue that the goal of mass education isn't to drive down wages, but the
idea that the skills gap is a myth doesn't hold water in my experience. I'm a software
engineer and manager at a company that pays well over the national average, with great
benefits, and it is downright difficult to find a qualified applicant who can pass a
rudimentary coding exam.
A lot of resumes come across my desk that look qualified on paper,
but that's not the same thing as being able to do the job. Secondarily, while I agree that
one day our field might be replaced by automation, there's a level of creativity involved
with good software engineering that makes your carpenter comparison a bit flawed.
"... I do think it's peculiar that Silicon Valley requires so many H1B visas... 'we can't find the
talent here' is the main excuse ..."
"... This is interesting. Indeed, I do think there is excess supply of software programmers. ..."
"... Well, it is either that or the kids themselves who have to pay for it and they are even less
prepared to do so. Ideally, college education should be tax payer paid but this is not the case in the
US. And the employer ideally should pay for the job related training, but again, it is not the case
in the US. ..."
"... Plenty of people care about the arts but people can't survive on what the arts pay. That was
pretty much the case all through human history. ..."
"... I was laid off at your age in the depths of the recent recession and I got a job. ..."
"... The great thing about software , as opposed to many other jobs, is that it can be done at home
which you're laid off. Write mobile (IOS or Android) apps or work on open source projects and get stuff
up on github. I've been to many job interviews with my apps loaded on mobile devices so I could show
them what I've done. ..."
"... Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers.
Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected to children
being taught carpentry in the twenties and thirties, so it had to be renamed "manual instruction" to
get round it. Denying children useful skills is indefensible. ..."
I do think it's peculiar that Silicon Valley requires so many H1B visas... 'we can't find
the talent here' is the main excuse, though many 'older' (read: over 40) native-born tech
workers will tell you there's plenty of talent here already; but even with the immigration hassles,
H1B workers will be cheaper overall...
This is interesting. Indeed, I do think there is excess supply of software programmers.
There is only a modest number of decent jobs, say as an algorithms developer in finance,
general architecture of complex systems or to some extent in systems security. However, these
jobs are usually occupied and the incumbents are not likely to move on quickly. Road blocks are
also put up by creating sub networks of engineers who ensure that some knowledge is not ubiquitous.
Most very high paying jobs in the technology sector are in the same standard upper management
roles as in every other industry.
Still, the ability to write a computer program is an enabler; knowing how it works means you
have an ability to imagine something and make it real. To me it is a bit like language: some people
can use language to make more money than others, but it is still important to be able to have
a basic level of understanding.
And yet I know a lot of people that has happened to. Better to replace a $125K-a-year programmer
with one who will do the same job, or even less, for $50K.
This could backfire if the programmers don't find the work or pay to match their expectations...
Programmers, after all, tend to make very good hackers if their minds are turned to it.
While I like your idea of what designing a computer program involves, in my nearly 40
years experience as a programmer I have rarely seen this done.
How else can you do it?
Java is popular because it's a very versatile language - on this list it's the most popular
general-purpose programming language. (Above it, JavaScript is just a scripting language and HTML/CSS
aren't even programming languages.)
https://fossbytes.com/most-used-popular-programming-languages/
... and below it you have to go down to C# at 20% to come to another general-purpose language,
and even that's a Microsoft house language.
The "correct" choice of programming language is also based on how many people in the
shop know it, so they can maintain code that's written in it by someone else.
> job-specific training is completely different. What a joke to persuade public school districts
to pick up the tab on job training.
Well, it is either that or the kids themselves who have to pay for it and they are even
less prepared to do so. Ideally, college education should be tax payer paid but this is not the
case in the US. And the employer ideally should pay for the job related training, but again, it
is not the case in the US.
> The bigger problem is that nobody cares about the arts, and as expensive as education
is, nobody wants to carry around a debt on a skill that won't bring in the bucks
Plenty of people care about the arts, but people can't survive on what the arts pay. That
was pretty much the case all through human history.
Since newspapers are consolidating and cutting jobs, we gotta clamp down on colleges offering BA degrees,
particularly in English Literature and journalism.
This article focuses on the US schools, but I can imagine it's the same in the UK. I don't think
these courses are going to be about creating great programmers capable of new innovations as much
as having a work force that can be their own IT Help Desk.
They'll learn just enough in these classes to do that.
Then most companies will be hiring for other jobs, but want to make sure you have the IT skills
to serve as your own "help desk" (although they will get no salary for their IT work).
I find that quite remarkable - 40 years ago you must have been using assembler and with hardly
any memory to work with. If you blitzed through that without applying the thought processes described,
well...I'm surprised.
I was laid off at your age in the depths of the recent recession and I got a job. As
I said in another posting, it usually comes down to fresh skills and good personal references
who will vouch for your work-habits and how well you get on with other members of your team.
The great thing about software, as opposed to many other jobs, is that it can be done
at home while you're laid off. Write mobile (iOS or Android) apps or work on open source projects
and get stuff up on GitHub. I've been to many job interviews with my apps loaded on mobile devices
so I could show them what I've done.
The situation has a direct comparison to today. It has nothing to do with land. There was a certain
amount of profit making work and not enough labour to satisfy demand. There is currently a certain
amount of profit making work and in many situations (especially unskilled low paid work) too much
labour.
So, is teaching people English or arithmetic all about reducing wages for the literate and numerate?
Or is this the most obtuse argument yet for avoiding what everyone in tech knows - even more
blatantly than in many other industries, wages are curtailed by offshoring; and in the US, by
having offshoring centres on US soil.
Well, speaking as someone who spends a lot of time trying to find really good programmers... frankly
there aren't that many about. We take most of ours from Eastern Europe and SE Asia, which is quite
expensive, given the relocation costs to the UK. But worth it.
So, yes, if more British kids learnt about coding, it might help a bit. But not much; the real
problem is that few kids want to study IT in the first place, and that the tuition standards in
most UK universities are quite low, even if they get there.
Robots, or AI, are already making us more productive. I can write programs today in an afternoon
that would have taken me a week a decade or two ago.
I can create a class and the IDE will take care of all the accessors, dependencies, enforce
our style-guide compliance, stub in the documentation, even most test cases, etc., and all I have
to write is the very specific stuff required by my application - the other 90% is generated for me.
Same with UI/UX - it stubs in relevant event handlers, bindings, dependencies, etc.
Programmers are a zillion times more productive than in the past, yet the demand keeps growing
because so much more stuff in our lives has processors and code. Your car has dozens of processors
running lots of software; your TV, your home appliances, your watch, etc.
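For readers who haven't seen this, a minimal sketch of what that generated 90% looks like (the class itself is hypothetical): only the two fields below are hand-written; the constructor, accessors, equals/hashCode and toString are the boilerplate a typical IDE emits at a keystroke.

    import java.util.Objects;

    // Only the two fields are "hand-written"; everything below them is the kind of
    // boilerplate a typical IDE generates automatically.
    public class Customer {
        private final String name;
        private final String email;

        public Customer(String name, String email) {
            this.name = name;
            this.email = email;
        }

        public String getName() { return name; }
        public String getEmail() { return email; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Customer)) return false;
            Customer other = (Customer) o;
            return Objects.equals(name, other.name) && Objects.equals(email, other.email);
        }

        @Override
        public int hashCode() {
            return Objects.hash(name, email);
        }

        @Override
        public String toString() {
            return "Customer{name=" + name + ", email=" + email + "}";
        }
    }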
Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers.
Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected
to children being taught carpentry in the twenties and thirties, so it had to be renamed "manual
instruction" to get round it. Denying children useful skills is indefensible.
Getting children to learn how to write code, as part of core education, will be the first step
to the long overdue revolution. The rest of us will still have to stick to burning buildings down
and stringing up the aristocracy.
Did you misread? It seemed like he was emphasizing that learning to code, like learning art (and
sports and languages), will help them develop skills that benefit them in whatever profession
they choose.
While I like your idea of what designing a computer program involves, in my nearly 40 years' experience
as a programmer I have rarely seen this done. And, FWIW, IMHO, while choosing the tool (programming language)
might reasonably be expected to follow designing a solution, in practice this rarely happens.
No, these days it's Java all the way, from day one.
I'd advise parents that the classes they need to make sure their kids excel in are acting/drama.
There is no better way to getting that promotion or increasing your pay like being a skilled actor
in the job market. It's a fake it till you make it deal.
This really has to be one of the silliest articles I read here in a very long time.
People, let your children learn to code. Even more, educate yourselves and start to code just
for the fun of it - look at it like a game.
The more people know how to code, the more likely they are to understand how stuff works. If you
were ever frustrated by how impossible it seems to shop on certain websites, learn to code and
you will be frustrated no more. You will understand the intent behind the process.
Even more, you will understand the inherent limitations and what is the meaning of safety. You
will be able to better protect yourself in a real time connected world.
Learning to code won't turn your kid into a programmer, just like ballet or piano classes won't
mean they'll ever choose art as their livelihood. So let the children learn to code and learn
along with them.
Tipping power to employers in any profession by oversupply of labour is not a good thing. Bit
of a macabre example here but...After the Black Death in the middle ages there was a huge under
supply of labour. It produced a consistent rise in wages and conditions and economic development
for hundreds of years after this. Not suggesting a massive depopulation. But you can achieve the
same effects by altering the power balance. With decades of Neoliberalism, the employers side
of the power see-saw is sitting firmly in the mud and is producing very undesired results for
the vast majority of people.
I am 59, and it is not just the age aspect it is the money aspect. They know you have experience
and expectations, and yet they believe hiring someone half the age and half the price, times 2
will replace your knowledge. I have been contracting in IT for 30 years, and now it is obvious
it is over. Experience at some point no longer mitigates age. I think I am at that point now.
Dear, dear, I know, I know, young people today . . . just not as good as we were. Everything is
just going down the loo . . . Just have a nice cuppa camomile (or chamomile if you're a Yank)
and try to relax ... " hey you kids, get offa my lawn !"
There are good reasons to teach coding. Too many of today's computer users are amazingly unaware
of the technology that allows them to send and receive emails, use their smart phones, and use
websites. Few understand the basic issues involved in computer security, especially as it relates
to their personal privacy. Hopefully some introductory computer classes could begin to remedy
this, and the younger the students the better.
Security problems are not strictly a matter of coding.
Security issues persist in tech. Clearly that is not a function of the size of the workforce.
I propose that it is a function of poor management and design skills. These are not taught in
any programming class I ever took. I learned these on the job and in an MBA program, and because
I was determined.
Don't confuse basic workforce training with an effective application of tech to authentic needs.
How can the "disruption" so prized in today's Big Tech do anything but aggravate our social
problems? Tech's disruption begins with a blatant ignorance of and disregard for causes, and believes
to its bones that a high tech app will truly solve a problem it cannot even describe.
Indeed, that idea has been around as long as COBOL and in practice has just made things worse.
What many people outside of software engineering don't seem to realise is that the coding
itself is a relatively small part of the job.
So how many female and older software engineers are there who are unable to get a job? I'm one of
them: at 55 I'm finding it impossible to get a job, and unlike many 'developers' I know what I'm doing.
Training more people for an occupation will result in more people becoming qualified to perform
that occupation, regardless of the fact that many will perform poorly at it. A CS degree is
no guarantee of competency, but it is one of the best indicators of general qualification we have
at the moment. If you can provide a better metric for analyzing the underlying qualifications
of the labor force, I'd love to hear it.
Regarding your anecdote, while interesting, it is poor evidence when compared to the aggregate
statistical data analyzed in the EPI study.
Good grief. It's not job-specific training. You sound like someone who knows nothing about
computer programming.
Designing a computer program requires analysing the task; breaking it down into its components,
prioritising them and identifying interdependencies, and figuring out which parts of it can be
broken out and done separately. Expressing all this in some programming language like Java, C,
or C++ is quite secondary.
So once you learn to organise a task properly you can apply it to anything - remodeling a house,
planning a vacation, repairing a car, starting a business, or administering a (non-software) project
at work.
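A minimal sketch of that decomposition, using a made-up chore (budgeting a small remodel; all numbers are illustrative assumptions): the design work is in identifying the steps and their dependencies, while the Java syntax that expresses them is secondary.

    // The numbers and the task are made up; the point is the decomposition:
    // each step identified in the analysis becomes its own small, testable unit.
    public class RemodelPlan {

        public static void main(String[] args) {
            double materials = estimateMaterials(12.5);           // square metres of flooring
            double labour = estimateLabour(16);                   // hours of work
            double contingency = 0.15 * (materials + labour);     // buffer for surprises
            System.out.printf("Budget: %.2f%n", materials + labour + contingency);
        }

        static double estimateMaterials(double squareMetres) {
            return squareMetres * 42.0;   // assumed price per square metre
        }

        static double estimateLabour(int hours) {
            return hours * 35.0;          // assumed hourly rate
        }
    }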
"... Thank you. The kids that spend high school researching independently and spend their nights hacking just for the love of it and getting a job without college are some of the most competent I've ever worked with. Passionless college grads that just want a paycheck are some of the worst. ..."
"... how about how new labor tried to sign away IT access in England to India in exchange for banking access there, how about the huge loopholes in bringing in cheap IT workers from elsewhere in the world, not conspiracies, but facts ..."
"... And I've never recommended hiring anyone right out of school who could not point me to a project they did on their own, i.e., not just grades and test scores. I'd like to see an IOS or Android app, or a open-source component, or utility or program of theirs on GitHub, or something like that. ..."
"... most of what software designers do is not coding. It requires domain knowledge and that's where the "smart" IDEs and AI coding wizards fall down. It will be a long time before we get where you describe. ..."
Instant feedback is one of the things I really like about programming, but it's also the
thing that some people can't handle. As I'm developing a program all day long the compiler is
telling me about build errors or warnings or when I go to execute it it crashes or produces
unexpected output, etc. Software engineers are bombarded all day with negative feedback and
little failures. You have to be thick-skinned for this work.
How is it shallow and lazy? I'm hiring for the real world so I want to see some real world
accomplishments. If the candidate is fresh out of university they can't point to work
projects in industry because they don't have any. But they CAN point to stuff they've done on
their own. That shows both motivation and the ability to finish something. Why do you object
to it?
Thank you. The kids that spend high school researching independently and spend their nights
hacking just for the love of it and getting a job without college are some of the most
competent I've ever worked with. Passionless college grads that just want a paycheck are some
of the worst.
There is a big difference between "coding" and programming. Coding for a smart phone app is a
matter of calling functions that are built into the device. For example, there are functions
for the GPS or for creating buttons or for simulating motion in a game. These are what we
used to call subroutines. The difference is that whereas we had to write our own subroutines,
now they are just preprogrammed functions. How those functions are written is of little or no
importance to today's coders.
Nor are they able to program on that level. Real programming
requires not only a knowledge of programming languages, but also a knowledge of the underlying
algorithms that make up actual programs. I suspect that "coding" classes operate on a quite
superficial level.
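To illustrate the distinction the commenter draws, a small hedged sketch: calling a built-in routine ("coding") versus writing the underlying algorithm yourself ("programming"), here an insertion sort standing in for the kind of detail the library call hides.

    import java.util.Arrays;

    public class SortContrast {

        public static void main(String[] args) {
            int[] a = {5, 2, 9, 1};
            Arrays.sort(a);                       // "coding": one call to a built-in routine
            System.out.println(Arrays.toString(a));

            int[] b = {5, 2, 9, 1};
            insertionSort(b);                     // "programming": knowing what that routine hides
            System.out.println(Arrays.toString(b));
        }

        // A hand-written insertion sort, the kind of algorithmic detail the library call hides.
        static void insertionSort(int[] xs) {
            for (int i = 1; i < xs.length; i++) {
                int key = xs[i];
                int j = i - 1;
                while (j >= 0 && xs[j] > key) {
                    xs[j + 1] = xs[j];
                    j--;
                }
                xs[j + 1] = key;
            }
        }
    }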
It's not about the amount of work or the amount of labor. It's about the comparative
availability of both and how that affects the balance of power, and that in turn affects the
overall quality of life for the 'majority' of people.
Most of this is not true. Peter Nelson gets it right by talking about breaking steps down and
thinking rationally. The reason you can't just teach the theory, however, is that humans
learn much better with feedback. Think about trying to learn how to build a fast car, but you
never get in and test its speed. That would be silly. Programming languages take the system
of logic that has been developed for centuries and give instant feedback on the results.
It's a language of rationality.
This article is about the US. The tech industry in the EU is entirely different, and
basically moribund. Where is the EU's Microsoft, Apple, Google, Amazon, Oracle, Intel,
Facebook, etc, etc? The opportunities for exciting interesting work, plus the time and
schedule pressures that force companies to overlook stuff like age because they need a
particular skill Right Now, don't exist in the EU. I've done very well as a software engineer
in my 60's in the US; I cannot imagine that would be the case in the EU.
Sorry, but that's just not true. I doubt you are really still programming; you're a quasi-programmer,
really a manager who likes to keep their hand in. You certainly aren't busy, as you've been
posting all over this CiF. Also, why would you try and hire someone with such disparate
skillsets? Makes no sense at all.
Oh, and you'd be correct that I do have workplace issues, i.e. I have a disability and I also
suffer from depression, but that shouldn't bar me from employment. And regarding my
skills going stale, that again contradicts your statement above that it's about
planning/analysis/algorithms etc. (which to some extent I agree with).
Not at all, it's really egalitarian. If I want to hire someone to paint my portrait, the best
way to know if they're any good is to see their previous work. If they've never painted a
portrait before then I may want to go with the girl who has.
Right? It's ridiculous. "Hey, there's this industry you can train for that is super valuable
to society and pays really well!"
Then Ben Tarnoff, "Don't do it! If you do you'll drive down wages for everyone else in the
industry. Build your fire starting and rock breaking skills instead."
How about how New Labour tried to sign away IT access in England to India in exchange for
banking access there? How about the huge loopholes in bringing in cheap IT workers from
elsewhere in the world? Not conspiracies, but facts.
I think the difference between gifted and not is motivation. But I agree it's not innate. The
kid who stayed up all night in high school hacking into the school server to fake his coding
class grade is probably more gifted than the one who spent 4 years in college getting a BS in
CS because someone told him he could get a job when he got out.
I've done some hiring in my life and I always ask them to tell me about stuff they did on
their own.
As several people have pointed out, writing a computer program requires analyzing and
breaking down a task into steps, identifying interdependencies, prioritizing the order,
figuring out what parts can be organized into separate tasks that can be done separately, etc.
These are completely independent of the language - I've been programming for 40 years in
everything from FORTRAN to APL to C to C# to Java and it's all the same. Not only that but
they transcend programming - they apply to planning a vacation, remodeling a house, or fixing
a car.
Neither coding nor having a bachelor's degree in computer science makes you a suitable job
candidate. I've done a lot of recruiting and interviews in my life, and right now I'm trying
to hire someone. And I've never recommended hiring anyone right out of school who
could not point me to a project they did on their own, i.e., not just grades and test scores.
I'd like to see an iOS or Android app, or an open-source component, or a utility or program of
theirs on GitHub, or something like that.
That's the thing that distinguishes software from many other fields - you can do something
real and significant on your own. If you haven't managed to do so in 4 years of college
you're not a good candidate.
Within the next year coding will be old news and you will simply be able to describe
things in your native language in such a way that the machine will be able to execute any set
of instructions you give it.
In a sense that's already true, as i noted elsewhere. 90% of the code in my projects (Java
and C# in their respective IDEs) is machine generated. I do relatively little "coding". But
the flaw in your idea is this: most of what software designers do is not coding. It requires
domain knowledge and that's where the "smart" IDEs and AI coding wizards fall down. It will
be a long time before we get where you describe.
Completely agree. At the highest levels there is more work that goes into managing complexity and making
sure nothing is missed than in making the wheels turn and the beepers beep.
I've actually interviewed people for very senior technical positions in Investment Banks who
had all the fancy talk in the world and yet failed at some very basic "write me a piece of
code that does X" tests.
Next hurdle on is people who have learned how to deal with certain situations and yet
don't really understand how it works so are unable to figure it out if you change the problem
parameters.
That said, the average coder is only slightly beyond this point. The ones who can take into
account maintainability and flexibility for future enhancements when developing are already a
minority, and those who can understand the why of software development process steps, design
software system architectures or do a proper Technical Analysis are very rare.
Hubris.
It's easy to mistake efficiency born of experience as innate talent. The difference
between a 'gifted coder' and a 'non gifted junior coder' is much more likely to be 10 or 15
years sitting at a computer, less if there are good managers and mentors involved.
Politicians love the idea of teaching children to 'code', because it sounds so modern, and
nobody could possibly object... could they? Unfortunately it simply shows up their utter
ignorance of technical matters because there isn't a language called 'coding'. Computer
programming languages have changed enormously over the years, and continue to evolve. If you
learn the wrong language you'll be about as welcome in the IT industry as a lamp-lighter or a
comptometer operator.
The pace of change in technology can render skills and qualifications obsolete in a matter
of a few years, and only the very best IT employers will bother to retrain their staff - it's
much cheaper to dump them. (Most IT posts are outsourced through agencies anyway - those that
haven't been off-shored. )
And this isn't even a good conspiracy theory; it's a bad one. He offers no evidence
that there's an actual plan or conspiracy to do this. I'm looking for an account of where the
advocates of coding education met to plot this in some castle in Europe or maybe a secret
document like "The Protocols of the Elders of Google", or some such.
Tool Users Vs Tool Makers.
The really good coders actually get why certain things work as they do and can adjust them
for different conditions. The mass produced coders are basically code copiers and code gluing
specialists.
People who get Masters and PhD's in computer science are not usually "coders" or software
engineers - they're usually involved in obscure, esoteric research for which there really is
very little demand. So it doesn't surprise me that they're unemployed. But if someone has a
Bachelor's in CS and they're unemployed I would have to wonder what they spent
their time at university doing.
The thing about software that distinguishes it from lots of other fields is that you can
make something real and significant on your own . I would expect any recent CS
major I hire to be able to show me an app or an open-source component or something similar
that they made themselves, and not just test scores and grades. If they could not then I
wouldn't even think about hiring them.
Fortunately for those of us who are actually good at coding, the difference in productivity
between a gifted coder and a non-gifted junior developer is something like 100-fold.
Knowing how to code and actually being efficient at creating software programs and systems
are about as far apart as knowing how to write and actually being able to write a bestselling
exciting Crime trilogy.
I do think there is excess supply of software programmers. There is only a modest number
of decent jobs, say as an algorithms developer in finance, general architecture of complex
systems or to some extent in systems security.
This article is about coding; most of those jobs require very little of that.
Most very high paying jobs in the technology sector are in the same standard upper
management roles as in every other industry.
How do you define "high paying"? Everyone I know (and I know a lot because I've been a sw
engineer for 40 years) who is working fulltime as a software engineer is making a
high-middle-class salary, and can easily afford a home, travel on holiday, investments,
etc.
> Already there. I take it you skipped right past the employment prospects for US STEM
grads - 50% chance of finding STEM work.
That just means 50% of them are no good and need to develop their skills further or try
something else.
Not everyone with a STEM degree from some 3rd-rate college is capable of doing complex IT or
STEM work.
So, is teaching people English or arithmetic all about reducing wages for the literate
and numerate?
Yes. Haven't you noticed how wage growth has flattened? That's because some do-gooders
thought it would be a fine idea to educate the peasants. There was a time when only the
well-to-do knew how to read and write, and that's why the well-to-do were well-to-do.
Education is evil. Stop educating people and then those of us who know how to read and write
can charge them for reading and writing letters and email. Better yet, we can have Chinese
and Indians do it for us and we just charge a transaction fee.
Massive numbers of the public use cars; it doesn't mean millions need schooling in auto
mechanics. Same for software coding. We aren't even using those who have Bachelors, Masters
and PhDs in CS.
"..importing large numbers of skilled guest workers from other countries through the H1-B
visa program..."
"skilled" is good. H1B has long ( appx 17 years) been abused and turned into trafficking
scheme. One can buy H1B in India. Powerful ethnic networks wheeling & dealing in US &
EU selling IT jobs to essentially migrants.
The real IT wages haven't been stagnant but steadily falling from the 90s. It's easy to
see why. $82K/year IT wage was about average in the 90s. Comparing the prices of housing
(& pretty much everything else) between now gives you the idea.
> not every kid wants or needs to have their soul sucked out of them sitting in front of a
screen full of code for some idiotic service that some other douchbro thinks is the next
iteration of sliced bread
Taking a couple of years of programming is not enough to do this as a job, don't
worry.
But learning to code is like learning maths - it helps to develop logical thinking, which
will benefit you in every area of your life.
"... A lot of basic entry level jobs require a good level of Excel skills. ..."
"... Programming is a cultural skill; master it, or even understand it on a simple level, and you understand how the 21st century works, on the machinery level. To bereave the children of this crucial insight is to close off a door to their future. ..."
"... What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra. ..."
"... Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it. ..."
"... We've seen this kind of tactic for some time now. Silicon Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all about) with little room for genuine creativity, or even understanding of what that actually means. I've seen how impossible it is to explain to upper level management how crappy cheap developers actually diminish productivity and value. All they see is that the requisition is filled for less money. ..."
"... Libertarianism posits that everyone should be free to sell their labour or negotiate their own arrangements without the state interfering. So if cheaper foreign labour really was undercutting American labout the Libertarians would be thrilled. ..."
"... Not producing enough to fill vacancies or not producing enough to keep wages at Google's preferred rate? Seeing as research shows there is no lack of qualified developers, the latter option seems more likely. ..."
"... We're already using Asia as a source of cheap labor for the tech industry. Why do we need to create cheap labor in the US? ..."
There are very few professional scribes nowadays; a good level of reading and writing is
simply the default even for the lowest-paid jobs. A lot of basic entry level jobs require a
good level of Excel skills. Several years from now, basic coding will be necessary to
manipulate basic tools in entry level jobs, especially as more and more real code
will be generated by expert systems overseen by a tiny number of supervisors. Coding jobs
will go the same way trucking jobs will go when driverless vehicles are perfected.
Offer the class, but don't make it mandatory. Just as I could never succeed at playing football,
others will not succeed at coding. The last thing the industry needs is more bad developers
showing up for a paycheck.
Programming is a cultural skill; master it, or even understand it on a simple level, and you
understand how the 21st century works, on the machinery level. To bereave the children of
this crucial insight is to close off a door to their future.
What's next, keep them off math, because, you know...
That's some crystal ball you have there. English teachers will need to know how to code? Same
with plumbers? Same with janitors, CEOs, and anyone working in the service industry?
The economy isn't a zero-sum game. Developing a more skilled workforce that can create more
value will lead to economic growth and improvement in the general standard of living.
Talented coders will start new tech businesses and create more jobs.
What a dumpster argument. I am not a programmer or even close, but a basic understanding of
coding has been important to my professional life. Coding isn't just about writing software.
Understanding how algorithms work, even simple ones, is a general skill on par with algebra.
But it isn't just about coding for Tarnoff. He seems to hold education in contempt
generally: "The far-fetched premise of neoliberal school reform is that education can mend
our disintegrating social fabric." If they can't literally fix everything, let's just get rid
of them, right?
Never mind that a good education is clearly one of the most important things
you can do for a person to improve their quality of life wherever they live in the world.
It's "neoliberal," so we better hate it.
I agree with the basic point. We've seen this kind of tactic for some time now. Silicon
Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all
about) with little room for genuine creativity, or even understanding of what that actually
means. I've seen how impossible it is to explain to upper level management how crappy cheap
developers actually diminish productivity and value. All they see is that the requisition is
filled for less money.
The bigger problem is that nobody cares about the arts, and as expensive as education is,
nobody wants to carry around a debt on a skill that won't bring in the bucks. And
smartphone-obsessed millennials have too short an attention span to fathom how empty their
lives are, devoid of aesthetic depth as they are.
I can't draw a definite link, but I think algorithm fails, which are based on fanatical
reliance on programmed routines as the solution to everything, are rooted in the shortage of
education and cultivation in the arts.
Economics is a social science, and all this is merely a reflection of shared cultural
values. The problem is, people think it's math (it's not) and therefore set in stone.
Libertarianism posits that everyone should be free to sell their labour or negotiate their
own arrangements without the state interfering. So if cheaper foreign labour really was
undercutting American labour, the Libertarians would be thrilled.
But it's not. I'm in my 60's and retiring but I've been a software engineer all my life.
I've worked for many different companies, and in different industries and I've never had any
trouble competing with cheap imported workers. The people I've seen fall behind were ones who
did not keep their skills fresh. When I was laid off in 2009 in my mid-50's I made sure my
mobile-app skills were bleeding edge (in those days ANYTHING having to do with mobile was
bleeding edge) and I used to go to job interviews with mobile devices to showcase what I
could do. That way they could see for themselves and not have to rely on just a CV.
The older guys who fell behind did so because their skills and toolsets had become
obsolete.
Now I'm trying to hire a replacement to write Android code for use in industrial
production and struggling to find someone with enough experience. So where is this oversupply
I keep hearing about?
Not producing enough to fill vacancies or not producing enough to keep wages at Google's
preferred rate? Seeing as research shows there is no lack of qualified developers, the latter
option seems more likely.
It's about ensuring those salaries no longer exist, by creating a source of cheap labor
for the tech industry.
We're already using Asia as a source of cheap labor for the tech industry. Why do we need
to create cheap labor in the US? That just seems inefficient.
There was never any need to give our jobs to foreigners. That is, if you are comparing the
production of domestic vs. foreign workers. The sole need was, and is, to increase profits.
Schools MAY be able to fix big social problems, but only if they teach a well-rounded
curriculum that includes classical history and the humanities. Job-specific training is
completely different. What a joke to persuade public school districts to pick up the tab on
job training. The existing social problems were not caused by a lack of programmers, and
cannot be solved by Big Tech.
I agree with the author that computer programming skills are not that limited in
availability. Big Tech solved the problem of the well-paid professional some years ago by
letting them go (these were mostly workers in their 50s) and replacing them with H1-B
visa-holders from India, who work for a fraction of what their experienced American
counterparts earn.
It is all about profits. Big Tech is no different from any other "industry."
Supply of apples does not affect the demand for oranges. Teaching coding in high school does
not necessarily alter the supply of software engineers. I studied Chinese History and geology
at University but my doing so has had no effect on the job prospects of people doing those
things for a living.
You would be surprised just how much a little coding knowledge has transformed my ability to
do my job (a job that is not directly related to IT at all).
Because teaching coding does not affect the supply of actual engineers. I've been a
professional software engineer for 40 years and coding is only a small fraction of what I do.
You and the linked article don't know what you're talking about. A CS degree does not equate
to a productive engineer.
A few years ago I was on the recruiting and interviewing committee to try to hire some
software engineers for a scientific instrument my company was making. The entire team had
about 60 people (hw, sw, mech engineers) but we needed 2 or 3 sw engineers with math and
signal-processing expertise. The project was held up for SIX months because we could
not find the people we needed. It would have taken a lot longer than that to train someone up
to our needs. Eventually we brought in some Chinese engineers which cost us MORE than what we
would have paid for an American engineer when you factor in the agency and visa
paperwork.
Modern software engineers are not just generic interchangeable parts - 21st century
technology often requires specialised scientific, mathematical, production or business
domain-specific knowledge and those people are hard to find.
Visa jobs are part of trade agreements. To be very specific, the US government (and the EU) trade
Western jobs for market access in the East: http://www.marketwatch.com/story/in-india-british-leader-theresa-may-preaches-free-trade-2016-11-07
There is no shortage. This is selling off the West's middle class. Take a look at remittances on
Wikipedia and you'll get a good idea just how much it costs the US and EU economies, for the sake
of record profits to Western industry.
I see advantages in teaching kids to code, and in having kids make Arduino and other CPU-powered
things. I don't see a lot of interest in science and tech coming from kids in school. There
are too many distractions from social media and game platforms, and not much interest in
developing tools for future tech and science.
Although coding per se is a technical skill it isn't designing or integrating systems. It is
only a small, although essential, part of the whole software engineering process. Learning to
code just gets you up the first steps of a high ladder that you need to climb a fair way if
you intend to use your skills to earn a decent living.
A friend of mine in the SV tech industry reports that they are about 100,000 programmers
short in just the internet security field.
Y'all are trying to create a problem where there isn't one. Maybe we shouldn't teach them
how to read either. They might want to work somewhere besides the grill at McDonalds.
Within the next year coding will be old news and you will simply be able to describe things
in your native language in such a way that the machine will be able to execute any set of
instructions you give it. Coding is going to change from its purely abstract form, which is not
utilized at its peak; if you can describe what you envision in an effective, concise manner you
could become a very good coder very quickly, and competence will be determined entirely by
imagination, and the barriers to entry will all but be extinct.
Total... utter... no other way... huge... will only get worse... everyone... (not a very
nuanced commentary, is it?)
I'm glad pieces like this are mounting, it is relevant that we counter the mix of
messianism and opportunism of Silicon Valley propaganda with convincing arguments.
They aren't immigrants. They're visa indentured foreign workers. Why does that matter? It's
part of the cheap+indentured hiring criteria. If it were only cheap, they'd be lowballing
offers to citizens and US new grads.
Correct premises:
- proletarianize programmers
- many qualified graduates simply can't find jobs.
Invalid conclusion:
- The problem is there aren't enough good jobs to be trained for.
That conclusion only makes sense if you skip right past "... importing large numbers of skilled
guest workers from other countries through the H1-B visa program. These workers earn less than
their American counterparts, and possess little bargaining power because they must remain
employed to keep their status."
Hiring Americans doesn't "hurt" their record profits. It's incessant greed and collusion
with our corrupt Congress.
This column was really annoying. I taught my students how to program when I was given a free
hand to create the computer studies curriculum for a new school I joined. (Not in the UK
thank Dog). 7th graders began with studying the history and uses of computers and
communications tech. My 8th grade learned about computer logic (AND, OR, NOT, etc) and moved
on with QuickBASIC in the second part of the year. My 9th graders learned about databases and
SQL and how to use HTML to make their own Web sites. Last year I received a phone call from
the father of one student thanking me for creating the course, his son had just received a
job offer and now works in San Francisco for Google. I am so glad I taught them "coding" (UGH) as the writer puts it, rather than arty-farty
subjects not worth a damn in the jobs market.
I live and work in Silicon Valley and you have no idea what you are talking about. There's no
shortage of coders at all. Terrific coders are let go because of their age and the
availability of much cheaper foreign coders (no, I am not opposed to immigration).
Looks like you pissed off a ton of people who can't write code and are none too happy with you
pointing out the reason they're slinging insurance for Geico.
I think you're quite right that coding skills will eventually enter the mainstream and
slowly bring down the cost of hiring programmers.
The fact is that even if you don't get paid to be a programmer you can absolutely benefit
from having some coding skills.
There may however be some kind of major coding revolution with the advent of quantum
computing. The way code is written now could become obsolete.
A well-argued article that hits the nail on the head. Amongst any group of coders, very few
are truly productive, and they are self-starters; training is really needed to do the admin.
There is not a huge skills shortage. That is why the author linked this EPI report analyzing
the data to prove exactly that. This may not be what people want to believe, but it is
certainly what the numbers indicate. There is no skills gap.
Yes. China and India are indeed training youth in coding skills, so that they can take jobs
in the USA and UK! It's been going on for 20 years and has resulted in many experienced IT
staff struggling to get work at all and, even if they can, to suffer stagnating wages.
Has anyone's job been at risk from a 16-year-old who can cobble together a couple of lines of
JavaScript since the dot-com bubble?
Good luck trying to teach a big enough pool of US school kids regular expressions, let
alone the kind of test-driven continuous delivery that is the norm in the industry now.
> A lot of resumes come across my desk that look qualified on paper, but that's not the
same thing as being able to do the job
I have exactly the same experience. There is undeniably a skill gap.
It takes about a year for a skilled professional to adjust and learn enough to become
productive; it takes about 3-5 years for a college grad.
It is nothing new. But the issue is that as the college grad gets trained, another company
steals him or her. Also keep in mind that all this time you are doing your own job and training
the new employee as time permits. Many companies in the US cut non-revenue departments (such as
IT) to the bone; we cannot afford to lose a person and then train another replacement for 3-5
years.
The solution? Hire a skilled person. But that means nobody is training college grads, and
in 10-20 years we are looking at a skill shortage to the point where the only option is
bringing in foreign labor.
American cut-throat companies that care only about the bottom line cannibalized
themselves.
Heh. You are not a coder, I take it. :) Going to be a few decades before even the
easiest coding jobs vanish.
Given how shit most coders of my acquaintance have been - especially in matters
of work ethic, logic, matching s/w to user requirements and willingness to test and correct
their gormless output - most future coding work will probably be in the area of disaster
recovery. Sorry, since the poor snowflakes can't face the sad facts, we have to call it
"business continuation" these days, don't we?
The demonization of Silicon Valley is clearly the next place to put all blame. Look what
"they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get
a rope!
I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San
Jose transform into a concrete jungle. There used to be quite a bit of semiconductor
equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings
have the same name : AVAILABLE. Most equipment and device manufacturing has moved to
Asia.
Programming started with binary, then machine code (hexadecimal or octal) and moved to
assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC,
PL-1, COBOL, PASCAL, C (and all its "+'s") followed, making programming easier for the less
talented. Now script-based languages (HTML, JAVA, etc.) are even higher level and
accessible to nearly all. Programming has become a commodity and will be priced like milk,
wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a
career.
How to Use "Script" Command To Record Linux Terminal Session May 30, 2014 By
Pungki Arianto Updated June
14, 2017 FacebookGoogle+
Twitter
Pinterest
LinkedIn
StumbleUpon
Reddit
The script command is very helpful for system admins. If a problem occurs on the system, it can
be very difficult to find out what commands were executed previously, so system admins know the
importance of this command. Sometimes you are on a server and realize that your team, or somebody
you know, is missing documentation on how to do a specific configuration. You can do the
configuration yourself, record all actions of your shell session, and show the record to that
person, who will then see exactly the same output you had on your shell at the moment of the
configuration. How does the script command work? script records a shell session for you so that
you can later look at the output you saw at the time, and you can even record with timing so that
you get a real-time playback. It is really useful and comes in handy at the strangest times and
places.
The script command keeps an action log for various tasks. It records everything in a session:
the things you type and the things you see. To use it, you just type the script
command in the terminal and type exit when finished. Everything between the
script and the exit commands is logged to the file, including the
confirmation messages from script itself.
1. Record your terminal session
script makes a typescript of everything printed on your terminal. If a file argument is
given, script saves all dialogue in the indicated file in the current directory. If no file
name is given, the typescript is saved in the default file, typescript. To record what you are
doing in the current shell session, just use the command below:
# script shell_record1
Script started, file is shell_record1
It indicates that a file shell_record1 is created. Let's check the file
# ls -l shell_*
-rw-r--r-- 1 root root 0 Jun 9 17:50 shell_record1
After completion of your task, you can enter exit or Ctrl-d to close
down the script session and save the file.
# exit
exit
Script done, file is shell_record1
You can see that script indicates the filename.
2. Check the content of a recorded
terminal session
When you use the script command, it records everything in a session: the things you type and
all of your output. Because the output is saved into a file, you can check its content
after exiting a recorded session. You can simply use a text editor or a text file
viewer command.
# cat shell_record1
Script started on Fri 09 Jun 2017 06:23:41 PM UTC
[root@centos-01 ~]# date
Fri Jun 9 18:23:46 UTC 2017
[root@centos-01 ~]# uname -a
Linux centos-01 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@centos-01 ~]# whoami
root
[root@centos-01 ~]# pwd
/root
[root@centos-01 ~]# exit
exit
Script done on Fri 09 Jun 2017 06:25:11 PM UTC
When you view the file you will notice that script also stores line feeds and backspaces.
It also records the time of the recording at the top and at the end of the file.
3. Record several terminal sessions
You can record as many terminal sessions as you want. When you finish one recording, just begin
another. This can be helpful if you want to record several configurations that you are doing, to
show them to your team or students, for example. You just need to name each recording file.
For example, let us assume that you have to do OpenLDAP, DNS, and Machma configurations. You
will need to record each configuration. To do this, just create a recording file corresponding
to each configuration as you finish it, as sketched below.
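A minimal sketch of what those per-configuration commands would look like (the file names here
are just placeholders matching the configurations mentioned above):
# script openldap_record
... do the OpenLDAP configuration, then type exit ...
# script dns_record
... do the DNS configuration, then type exit ...
# script machma_record
... do the Machma configuration, then type exit ...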
Do the same for the other configurations. Note that if you run the script command followed by
an existing filename, the file will be overwritten, so you will lose its previous contents.
Now, let us imagine that you have begun the Machma configuration but have to abort it in order
to finish the DNS configuration because of some emergency. Later you want to continue the Machma
configuration where you left off. That means you want to record the next steps into the existing
file machma_record without deleting its previous content; to do this you use the
script -a command to append the new output to the file.
This is the content of our recorded file
Now if we want to continue our recording in this file without deleting the content already
present, we will do
# script -a machma_record
Script started, file is machma_record
Now continue the configuration, then exit when finished and let's check the content of the
recorded file.
Note the new timestamp that appears for the new recording. You can see that the file contains
both the previous and the new records.
4. Replay a Linux terminal session
We have seen that it is possible to view the content of the recorded file with commands that
display text files. The script command also makes it possible to view the recorded session like
a video: you will review exactly what you did, step by step, at the moment you were entering the
commands, as if you were watching a video. In other words, you can play back the recorded
terminal session.
To do this, you have to use the --timing option of the script command when you start
the recording.
# script --timing=file_time shell_record1
Script started, file is shell_record1
Note that the file into which we record is shell_record1. When the recording is
finished, exit normally.
The --timing option outputs timing data to the indicated file. This data
contains two fields, separated by a space: how much time elapsed since the
previous output, and how many characters were output this time. This information can be used to
replay typescripts with realistic typing and output delays.
Now, to replay the terminal session, we use the scriptreplay command instead of the script
command, with the same syntax as when recording the session. See below:
# scriptreplay --timing=file_time shell_record1
You will see that the recorded session is played back as if you were watching a video of
everything you did. You can also just pass the timing file without spelling out
--timing=file_time. See below:
# scriptreplay file_time shell_record1
So you understand that the first parameter is the timing file and the second is the recorded
file.
Conclusion
The script command can be your go-to tool for documenting your work and showing others what
you did in a session. It can be used as a way to log what you are doing in a shell session.
When you run script, a new shell is forked; it reads standard input and output from your
terminal tty and stores the data in a file.
Systemd is a system and service manager for Linux operating systems which introduces the concept
of systemd units and provides a number of features such as parallel startup of system services at
boot time, on-demand activation of daemons, and so on. It helps to manage services on your Linux
OS, such as starting, stopping and reloading them. But to operate on services with systemd, you
need to know which services exist and the name that exactly matches each service. There is a tool
that can help Linux users navigate the different services available on their system, much as the
top command does for running processes.
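As a quick illustration of the start/stop/reload operations mentioned above, here is a minimal
sketch using standard systemctl commands (sshd.service is only an example unit name; substitute
whatever unit you are interested in):
# List the service units systemd currently knows about
systemctl list-units --type=service
# Start, inspect, reload and stop a single unit (reload only works for units that support it)
systemctl start sshd.service
systemctl status sshd.service
systemctl reload sshd.service
systemctl stop sshd.service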
What is chkservice?
chkservice is a new and handy tool for managing systemd units in a terminal. It is a
GitHub project developed by
Svetlana Linuxenko. Its distinguishing feature is that it lists the different services present
on your system: you get a view of each available service and can manage it as you want.
You may use spaces, parentheses and so forth, if you quote the expression:
$ let a='(5+2)*3'
For a full list of operators available, see help let or the manual.
Next, the actual arithmetic evaluation compound command syntax:
$ ((a=(5+2)*3))
This is equivalent to let, but we can also use it as a command, for
example in an if statement:
$ if (($a == 21)); then echo 'Blackjack!'; fi
Operators such as ==, <, > and so on cause a comparison
to be performed, inside an arithmetic evaluation. If the comparison is "true" (for example,
10 > 2 is true in arithmetic -- but not in strings!) then the compound command
exits with status 0. If the comparison is false, it exits with status 1. This makes it suitable
for testing things in a script.
Although not a compound command, an arithmetic substitution (or arithmetic
expression) syntax is also available:
$ echo "There are $(($rows * $columns)) cells"
Inside $((...)) is an arithmetic context, just like with ((...)),
meaning we do arithmetic (multiplying things) instead of string manipulations (concatenating
$rows, space, asterisk, space, $columns). $((...)) is also
portable to the POSIX shell, while ((...)) is not.
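As a small illustration of that portability difference (a sketch run interactively in bash; in a
strict POSIX shell such as dash, only the first form would work):
$ rows=3 columns=7
$ echo "There are $((rows * columns)) cells"   # arithmetic substitution: POSIX-portable
There are 21 cells
$ ((total = rows * columns))                   # arithmetic command: bash/ksh extension
$ echo $total
21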
Readers who are familiar with the C programming language might wish to know that
((...)) has many C-like features. Among them are the ternary operator:
$ ((abs = (a >= 0) ? a : -a))
and the use of an integer value as a truth value:
$ if ((flag)); then echo "uh oh, our flag is up"; fi
Note that we used variables inside ((...)) without prefixing them with $-signs.
This is a special syntactic shortcut that Bash allows inside arithmetic evaluations and
arithmetic expressions.
There is one final thing we must mention about ((flag)). Because the inside of
((...)) is C-like, a variable (or expression) that evaluates to zero will be
considered false for the purposes of the arithmetic evaluation. Then, because the
evaluation is false, it will exit with a status of 1. Likewise, if the expression
inside ((...)) is non-zero, it will be considered true; and since
the evaluation is true, it will exit with status 0. This is potentially very
confusing, even to experts, so you should take some time to think about this. Nevertheless,
when things are used the way they're intended, it makes sense in the end:
$ flag=0 # no error
$ while read line; do
> if [[ $line = *err* ]]; then flag=1; fi
> done < inputfile
$ if ((flag)); then echo "oh no"; fi
I have two questions, the first one is most important:
How do I take 65 and turn it into A?
\'A converts an ASCII character to its value using printf. Is the syntax
specific to printf or is it used anywhere else in BASH? (Such small
strings are hard to Google for.)
For your second question, it seems the leading-quote syntax (\'A) is specific
to printf:
If the leading character is a single-quote or double-quote, the value shall be the
numeric value in the underlying codeset of the character following the single-quote or
double-quote.
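For the first question (turning 65 into A), which is not answered directly above, one common
approach is to go through the character's octal code; this is a sketch relying only on standard
printf features:
# ASCII value -> character: build an octal escape and feed it back to printf
printf "\\$(printf '%03o' 65)\n"    # prints: A   (65 is octal 101, so the format becomes \101)
# Character -> ASCII value: the leading-quote form described above
printf '%d\n' "'A"                  # prints: 65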
You can use tr to convert from DOS to Unix; however, you can only do this safely
if CR appears in your file only as the first byte of a CRLF byte pair. This is usually the
case. You then use:
tr -d '\015' <DOS-file >UNIX-file
Note that the name DOS-file is different from the name UNIX-file;
if you try to use the same name twice, you will end up with no data in the file.
You can't do it the other way round (with standard 'tr').
If you know how to enter carriage return into a script (control-V,
control-M to enter control-M), then:
sed 's/^M$//' # DOS to Unix
sed 's/$/^M/' # Unix to DOS
where the '^M' is the control-M character. You can also use the bash ANSI-C quoting
mechanism to specify the carriage return:
sed $'s/\r$//' # DOS to Unix
sed $'s/$/\r/' # Unix to DOS
However, if you're going to have to do this very often (more than once, roughly speaking),
it is far more sensible to install the conversion programs (e.g. dos2unix and unix2dos,
or perhaps dtou and utod) and use them.
# IN UNIX ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format.
sed 's/.$//' # assumes that all lines end with CR/LF
sed 's/^M$//' # in bash/tcsh, press Ctrl-V then Ctrl-M
sed 's/\x0D$//' # works on ssed, gsed 3.02.80 or higher
# IN UNIX ENVIRONMENT: convert Unix newlines (LF) to DOS format.
sed "s/$/`echo -e \\\r`/" # command line under ksh
sed 's/$'"/`echo \\\r`/" # command line under bash
sed "s/$/`echo \\\r`/" # command line under zsh
sed 's/$/\r/' # gsed 3.02.80 or higher
Use sed -i
for in-place conversion e.g. sed -i 's/..../' file .
This problem can be solved with standard tools, but there are sufficiently many traps for the
unwary that I recommend you install the flip command, which was written over
20 years ago by Rahul Dhesi, the author of zoo. It does an excellent job
converting file formats while, for example, avoiding the inadvertent destruction of binary
files, which is a little too easy if you just race around altering every CRLF you see...
The solutions posted so far only deal with part of the problem, converting DOS/Windows' CRLF
into Unix's LF; the part they're missing is that DOS uses CRLF as a line separator,
while Unix uses LF as a line terminator. The difference is that a DOS file
(usually) won't have anything after the last line in the file, while a Unix file will. To do the
conversion properly, you need to add that final LF (unless the file is zero-length, i.e. has
no lines in it at all). My favorite incantation for this (with a little added logic to handle
Mac-style CR-separated files, and not molest files that are already in Unix format) is a bit
of perl:
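A rough equivalent, not necessarily the original author's exact one-liner, that converts CRLF or
bare-CR line breaks to LF and guarantees a terminating newline on the last line might look like
this (the file name is a placeholder):
# Convert CRLF (DOS) or lone CR (old Mac) line breaks to LF in place,
# and append a final newline if the last line lacks one
perl -i -pe 's/\r\n?/\n/g; $_ .= "\n" if eof && !/\n\z/' somefile.txt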
brew install dos2unix
for csv in *.csv; do dos2unix -c mac ${csv}; done;
Make sure you have made copies of the files, as this command will modify the files in
place. The -c mac option makes the switch to be compatible with osx.
You can use awk. Set the record separator (RS) to a regexp that matches all
possible newline characters, and set the output record separator (ORS) to the Unix-style
newline character.
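A sketch of that awk approach (treating RS as a regular expression requires gawk; POSIX awk only
honours a single-character RS, and the file names here are placeholders):
# Normalise CRLF, lone CR, or LF record separators to plain LF
gawk 'BEGIN { RS = "\r\n|\r|\n"; ORS = "\n" } { print }' infile.txt > outfile.txt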
I just had to ponder that same question (on the Windows side, but equally applicable to Linux).
Surprisingly, nobody mentioned a highly automated way of doing CRLF<->LF conversion
for text files using the good old zip -ll option (Info-ZIP):
zip -ll textfiles-lf.zip files-with-crlf-eol.*
unzip textfiles-lf.zip
NOTE: this would create a zip file preserving the original file names but converting the
line endings to LF. Then unzip would extract the files as zip'ed, that is with
their original names (but with LF-endings), thus prompting to overwrite the local original
files if any.
Relevant excerpt from the zip --help :
zip --help
...
-l convert LF to CR LF (-ll CR LF to LF)
(wired.com)
Posted by EditorDavid on Saturday September 23, 2017 @09:30PM from the looking-inside dept.
Amazon aggressively recruited thousands of retirees living in mobile homes to migrate to
Amazon's warehouses for seasonal work, according to a story shared by nightcats . Wired reports: From a hiring perspective,
the RVers were a dream labor force. They showed up on demand and dispersed just before
Christmas in what the company cheerfully called a "taillight parade." They asked for
little in the way of benefits or protections . And though warehouse jobs were physically
taxing -- not an obvious fit for older bodies -- recruiters came to see CamperForce workers'
maturity as an asset. These were diligent, responsible employees. Their attendance rates were
excellent. "We've had folks in their eighties who do a phenomenal job for us," noted Kelly
Calmes, a CamperForce representative, in one online recruiting seminar... In a company
presentation, one slide read, "Jeff Bezos has predicted that, by the year 2020, one out of
every four workampers in the United States will have worked for Amazon." The article is
adapted from a new book called "Nomadland," which also describes seniors in mobile homes being
recruited for sugar beet harvesting and jobs at an Iowa amusement park, as well as work as
campground hosts at various national parks.
Many of them "could no longer afford traditional housing," especially after the financial
downturn of 2008. But at least they got to hear stories from their trainers at Amazon about the
occasional "unruly" shelf-toting "Kiva" robot: They told us how one robot had tried to drag
a worker's stepladder away. Occasionally, I was told, two Kivas -- each carrying a tower of
merchandise -- collided like drunken European soccer fans bumping chests. And in April of that
year, the Haslet fire department responded to an accident at the warehouse involving a can of
"bear repellent" (basically industrial-grade pepper spray). According to fire department
records, the can of repellent was run over by a Kiva and the warehouse had to be
evacuated.
"... He's also a sort of maritime-technology historian. A tall, white-haired man in a baseball cap, shark t-shirt and boat shoes, Benjamin said he's spent the last 15 years "making vehicles wet." He has the U.S. armed forces to thank for making his autonomous work possible. The military sparked the field of marine autonomy decades ago, when it began demanding underwater robots for mine detection, ..."
"... In 2006, Benjamin launched his open-source software project. With it, a computer is able to take over a boat's navigation-and-control system. Anyone can write programs for it. The project is funded by the U.S. Office for Naval Research and Battelle Memorial Institute, a nonprofit. Benjamin said there are dozens of types of vehicles using the software, which is called MOOS-IvP. ..."
Frank Marino, an engineer with Sea Machines Robotics, uses a remote control belt pack to control
a self-driving boat in Boston Harbor. (Bloomberg) -- Frank Marino sat in a repurposed U.S. Coast
Guard boat bobbing in Boston Harbor one morning late last month. He pointed the boat straight at
a buoy several hundred yards away, while his colleague Mohamed Saad Ibn Seddik used a laptop to set
the vehicle on a course that would run right into it. Then Ibn Seddik flipped the boat into autonomous
driving mode. They sat back as the vessel moved at a modest speed of six knots, smoothly veering
right to avoid the buoy, and then returned to its course.
In a slightly apologetic tone, Marino acknowledged the experience wasn't as harrowing as barreling
down a highway in an SUV that no one is steering. "It's not like a self-driving car, where the wheel
turns on its own," he said. Ibn Seddik tapped in directions to get the boat moving back the other
way at twice the speed. This time, the vessel kicked up a wake, and the turn felt sharper, even as
it gave the buoy the same wide berth as it had before. As far as thrills go, it'd have to do. Ibn
Seddik said going any faster would make everyone on board nauseous.
The two men work for Sea Machines Robotics Inc., a three-year old company developing computer
systems for work boats that can make them either remote-controllable or completely autonomous. In
May, the company spent $90,000 to buy the Coast Guard hand-me-down at a government auction. Employees
ripped out one of the four seats in the cabin to make room for a metal-encased computer they call
a "first-generation autonomy cabinet." They painted the hull bright yellow and added the words "Unmanned
Vehicle" in big, red letters. Cameras are positioned at the stern and bow, and a dome-like radar
system and a digital GPS unit relay additional information about the vehicle's surroundings. The
company named its new vessel Steadfast.
Autonomous maritime vehicles haven't drawn as much attention as self-driving cars, but they're
hitting the waters with increased regularity. Huge shipping interests, such as Rolls-Royce Holdings
Plc, Tokyo-based fertilizer producer Nippon Yusen K.K. and BHP Billiton Ltd., the world's largest
mining company, have all recently announced plans to use driverless ships for large-scale ocean transport.
Boston has become a hub for marine technology startups focused on smaller vehicles, with a handful
of companies like Sea Machines building their own autonomous systems for boats, diving drones and
other robots that operate on or under the water.
As Marino and Ibn Seddik were steering Steadfast back to dock, another robot boat trainer, Michael
Benjamin, motored past them. Benjamin, a professor at Massachusetts Institute of Technology, is a
regular presence on the local waters. His program in marine autonomy, a joint effort by the school's
mechanical engineering and computer science departments, serves as something of a ballast for Boston's
burgeoning self-driving boat scene. Benjamin helps engineers find jobs at startups and runs an open-source
software project that's crucial to many autonomous marine vehicles.
He's also a sort of maritime-technology historian. A tall, white-haired man in a baseball
cap, shark t-shirt and boat shoes, Benjamin said he's spent the last 15 years "making vehicles wet."
He has the U.S. armed forces to thank for making his autonomous work possible. The military sparked
the field of marine autonomy decades ago, when it began demanding underwater robots for mine detection,
Benjamin explained from a chair on MIT's dock overlooking the Charles River. Eventually, self-driving
software worked its way into all kinds of boats.
These systems tended to chart a course based on a specific script, rather than sensing and responding
to their environments. But a major shift came about a decade ago, when manufacturers began allowing
customers to plug in their own autonomy systems, according to Benjamin. "Imagine where the PC revolution
would have gone if the only one who could write software on an IBM personal computer was IBM," he
said.
In 2006, Benjamin launched his open-source software project. With it, a computer is able to
take over a boat's navigation-and-control system. Anyone can write programs for it. The project is
funded by the U.S. Office for Naval Research and Battelle Memorial Institute, a nonprofit. Benjamin
said there are dozens of types of vehicles using the software, which is called MOOS-IvP.
Startups using MOOS-IvP said it has created a kind of common vocabulary. "If we had a proprietary
system, we would have had to develop training and train new employees," said Ibn Seddik. "Fortunately
for us, Mike developed a course that serves exactly that purpose."
Teaching a boat to drive itself is easier than conditioning a car in some ways. They typically
don't have to deal with traffic, stoplights or roundabouts. But water is a unique challenge. "The structure
of the road, with traffic lights, bounds your problem a little bit," said Benjamin. "The number of
unique possible situations that you can bump into is enormous." At the moment, underwater robots
represent a bigger chunk of the market than boats. Sales are expected to hit $4.6 billion in 2020,
more than double the amount from 2015, according to ABI Research. The biggest customer is the military.
Several startups hope to change that. Michael Johnson, Sea Machines' chief executive officer,
said the long-term potential for self-driving boats involves teams of autonomous vessels working
in concert. In many harbors, multiple tugs bring in large container ships, communicating either through
radio or by whistle. That could be replaced by software controlling all the boats as a single system,
Johnson said.
Sea Machines' first customer is Marine Spill Response Corp., a nonprofit group funded by oil companies.
The organization operates oil spill response teams that consist of a 210-foot ship paired with a
32-foot boat, which work together to drag a device collecting oil. Self-driving boats could help
because staffing the 32-foot boat in choppy waters or at night can be dangerous, but the theory needs
proper vetting, said Judith Roos, a vice president for MSRC. "It's too early to say, 'We're going
to go out and buy 20 widgets.'"
Another local startup, Autonomous Marine Systems Inc., has been sending boats about 10 miles out
to sea and leaving them there for weeks at a time. AMS's vehicles are designed to operate for long
stretches, gathering data in wind farms and oil fields. One vessel is a catamaran dubbed the Datamaran,
a name that first came from an employee's typo, said AMS CEO Ravi Paintal. The company also uses
Benjamin's software platform. Paintal said AMS's longest missions so far have been 20 days, give
or take. "They say when your boat can operate for 30 days out in the ocean environment, you'll be
in the running for a commercial contract," he said.
"... To emulate those capabilities on computers will probably require another 100 years or more. Selective functions can be imitated even now (manipulator that deals with blocks in a pyramid was created in 70th or early 80th I think, but capabilities of human "eye controlled arm" is still far, far beyond even wildest dreams of AI. ..."
"... Similarly human intellect is completely different from AI. At the current level the difference is probably 1000 times larger then the difference between a child with Down syndrome and a normal person. ..."
"... Human brain is actually a machine that creates languages for specific domain (or acquire them via learning) and then is able to operate in terms of those languages. Human child forced to grow up with animals, including wild animals, learns and is able to use "animal language." At least to a certain extent. Some of such children managed to survive in this environment. ..."
"... If you are bilingual, try Google translate on this post. You might be impressed by their recent progress in this field. It did improved considerably and now does not cause instant laugh. ..."
"... One interesting observation that I have is that automation is not always improve functioning of the organization. It can be quite opposite :-). Only the costs are cut, and even that is not always true. ..."
"... Of course the last 25 years (or so) were years of tremendous progress in computers and networking that changed the human civilization. And it is unclear whether we reached the limit of current capabilities or not in certain areas (in CPU speeds and die shrinking we probably did; I do not expect anything significant below 7 nanometers: https://en.wikipedia.org/wiki/7_nanometer ). ..."
"When combined with our brains, human fingers are amazingly fine manipulation devices."
Not only fingers. The whole human arm is an amazing device. Pure magic, if you ask me.
To emulate those capabilities on computers will probably require another 100 years or more.
Selective functions can be imitated even now (a manipulator that deals with blocks in a pyramid
was created in the '70s or early '80s, I think), but the capabilities of the human
"eye-controlled arm" are still far, far beyond even the wildest dreams of AI.
Similarly, human intellect is completely different from AI. At the current level the difference
is probably 1000 times larger than the difference between a child with Down syndrome and a normal
person.
The human brain is actually a machine that creates languages for specific domains (or acquires
them via learning) and then is able to operate in terms of those languages. A human child forced
to grow up with animals, including wild animals, learns and is able to use "animal language,"
at least to a certain extent. Some such children managed to survive in this environment.
Such cruel natural experiments have shown that the level of flexibility of the human brain is
something really incredible. And IMHO it cannot be achieved by computers (although never say never).
Here we are talking about tasks that are a million times more complex than playing Go
or chess, or driving a car on the street.
The limits of AI are clearly visible when we see the quality of translation from one language
to another. For more or less complex technical text it remains medium to low. As in "requires
human editing".
If you are bilingual, try Google Translate on this post. You might be impressed by their
recent progress in this field. It has improved considerably and no longer causes instant laughter.
The same goes for speech recognition. The progress is tremendous, especially over the last three
to five years. But it is still far from perfect. Now, with some training, programs like Dragon are
quite usable as dictation devices on, say, a PC with a 4-core 3GHz CPU and 16 GB of memory
(especially if you are a native English speaker), but if you deal with specialized text or have a
strong accent, they still leave much to be desired (although your level of knowledge of the
program, experience and persistence can improve the results considerably).
One interesting observation that I have is that automation does not always improve the functioning
of the organization. It can be quite the opposite :-). Only the costs are cut, and even that is not
always true.
Of course, the last 25 years (or so) were years of tremendous progress in computers and
networking that changed human civilization. And it is unclear whether or not we have reached the
limit of current capabilities in certain areas (in CPU speeds and die shrinking we probably have;
I do not expect anything significant below 7 nanometers:
https://en.wikipedia.org/wiki/7_nanometer ).
"... It's fine to acknowledge a misstep. But spin the answer to focus on why this new situation is such an ideal match of your abilities to the employer's needs. ..."
I have been in my present position for over 25 years. Five years ago, I was assigned
a new boss, who has a reputation in my industry for harassing people in positions such as mine
until they quit. I have managed to survive, but it's clear that it's time for me to move along.
How should I answer the inevitable interview question: Why would I want to leave after so long?
I've heard that speaking badly of a boss is an interview no-no, but it really is the only reason
I'm looking to find something new. BROOKLYN
I am unemployed and interviewing for a new job. I have read that when answering interview
questions, it's best to keep everything you say about previous work experiences or managers positive.
But what if you've made one or two bad choices in the past: taking jobs because you
needed them, figuring you could make it work - then realizing the culture was a bad fit, or you
had an arrogant, narcissistic boss?
Nearly everyone has had a bad work situation or boss. I find it refreshing when I read
stories about successful people who mention that they were fired at some point, or didn't get
along with a past manager. So why is it verboten to discuss this in an interview? How can the
subject be addressed without sounding like a complainer, or a bad employee? CHICAGO
As these queries illustrate, the temptation to discuss a negative work situation can be strong
among job applicants. But in both of these situations, and in general, criticizing a current or past
employer is a risky move. You don't have to paint a fictitiously rosy picture of the past, but
dwelling on the negative can backfire. Really, you don't want to get into a detailed explanation
of why you have or might quit at all. Instead, you want to talk about why you're such a perfect fit
for the gig you're applying for.
So, for instance, a question about leaving a long-held job could be answered by suggesting that
the new position offers a chance to contribute more and learn new skills by working with a stronger
team. This principle applies in responding to curiosity about jobs that you held for only a short
time.
It's fine to acknowledge a misstep. But spin the answer to focus on why this new situation
is such an ideal match of your abilities to the employer's needs.
The truth is, even if you're completely right about the past, a prospective employer doesn't really
want to hear about the workplace injustices you've suffered, or the failings of your previous employer.
A manager may even become concerned that you will one day add his or her name to the list of people
who treated you badly. Save your cathartic outpourings for your spouse, your therapist, or, perhaps,
the future adoring profile writer canonizing your indisputable success.
Send your workplace conundrums to [email protected], including your name and contact
information (even if you want it withheld for publication). The Workologist is a guy with well-intentioned
opinions, not a professional career adviser. Letters may be edited.
Numerous Slashdot readers are reporting that they are facing issues accessing Google Drive, the
productivity suite from the Mountain View-based company. Google's dashboard confirms that
Drive is facing an outage.
Third-party web monitoring tool DownDetector also
reports thousands of similar complaints from users. The company said, "Google Drive service has
already been restored for some users, and we expect a resolution for all users in the near future.
Please note this time frame is an estimate and may change. Google Drive is not loading files and
results in failures for a subset of users."
"... Karen Panetta, the dean of graduate engineering education at Tufts University and the vice president of communications and public relations at the IEEE-USA, believes the outcome for tech will be Logan's Run -like, where age sets a career limit... ..."
"... It's great to get the new hot shot who just graduated from college, but it's also important to have somebody with 40 years of experience who has seen all of the changes in the industry and can offer a different perspective." ..."
Will the median age of tech firms rise as the Millennial generation
grows older...? The median age range at Google, Facebook, SpaceX, LinkedIn,
Amazon, Salesforce, Apple and Adobe is 29 to 31, according to a study last
year by PayScale, which analyzes self-reported data...
Karen Panetta, the dean
of graduate engineering education at Tufts University and the vice president
of communications and public relations at the IEEE-USA, believes the outcome
for tech will be Logan's Run-like, where age sets a career limit...
Tech firms want people with the current skill sets, and those "without those
skills will be pressured to leave or see minimal career progression," said Panetta...
The idea that the tech industry may have an age bias is not scaring the new
college grads away. "They see retirement so far off, so they are more interested
in how to move up or onto new startup ventures or even business school," said
Panetta.
"The reality sets in when they have families and companies downsize
and it's not so easy to just pick up and go on to another company," she said.
None of this may be a foregone conclusion.
Millennials may see the experience
of today's older workers as a cautionary tale, and usher in cultural changes... David Kurtz, a labor relations partner at Constangy, Brooks, Smith & Prophete,
suggests tech firms should be sharing age-related data about their workforce,
adding "The more of a focus you place on an issue the more attention it gets
and the more likely that change can happen.
It's great to get the new hot shot
who just graduated from college, but it's also important to have somebody with
40 years of experience who has seen all of the changes in the industry and can
offer a different perspective."
Option 1a: While loop: Single line at a time: Input redirection
#!/bin/bash
filename='peptides.txt'
echo Start
while read p; do
echo $p
done < $filename
Option 1b: While loop: Single line at a time:
Open the file, read from a file descriptor (in this case file descriptor #4).
#!/bin/bash
filename='peptides.txt'
exec 4<$filename
echo Start
while read -u4 p ; do
echo $p
done
Option 2: For loop: Read file into single variable and parse.
This syntax will parse "lines" based on any white space between the tokens. This still works because
the given input file lines are single-word tokens. If there were more than one token per line,
then this method would not work as well. Also, reading the full file into a single variable is
not a good strategy for large files.
#!/bin/bash
filename='peptides.txt'
filelines=`cat $filename`
echo Start
for line in $filelines ; do
echo $line
done
This is no better than other answers, but is one more way to get the job done in a file without
spaces (see comments). I find that I often need one-liners to dig through lists in text files
without the extra step of using separate script files.
for word in $(cat peptides.txt); do echo $word; done
This format allows me to put it all in one command-line. Change the "echo $word" portion to
whatever you want and you can issue multiple commands separated by semicolons. The following example
uses the file's contents as arguments into two other scripts you may have written.
for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done
Or if you intend to use this like a stream editor (learn sed) you can dump the output to another
file as follows.
for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done > outfile.txt
I've used these as written above because I have used text files where I've created them with
one word per line. (See comments) If you have spaces that you don't want splitting your words/lines,
it gets a little uglier, but the same command still works as follows:
OLDIFS=$IFS; IFS=$'\n'; for line in $(cat peptides.txt); do cmd_a.sh $line; cmd_b.py $line; done > outfile.txt; IFS=$OLDIFS
This just tells the shell to split on newlines only, not spaces, then returns the environment
back to what it was previously. At this point, you may want to consider putting it all into a
shell script rather than squeezing it all into a single line, though.
A few more things not covered by other answers: Reading from a delimited file
# ':' is the delimiter here, and there are three fields on each line in the file
# IFS set below is restricted to the context of `read`, it doesn't affect any other code
while IFS=: read -r field1 field2 field3; do
# process the fields
# if the line has less than three fields, the missing fields will be set to an empty string
# if the line has more than three fields, `field3` will get all the values, including the third field plus the delimiter(s)
done < input.txt
Reading from more than one file at a time
while read -u 3 -r line1 && read -u 4 -r line2; do
# process the lines
# note that the loop will end when we reach EOF on either of the files, because of the `&&`
done 3< input1.txt 4< input2.txt
Reading a whole file into an array (Bash version 4+)
readarray -t my_array < my_file
or
mapfile -t my_array < my_file
And then
for line in "${my_array[@]}"; do
# process the lines
done
#!/bin/bash
#
# Change the file name from "test" to desired input file
# (The comments in bash are prefixed with #'s)
for x in $(cat test.txt)
do
echo $x
done
$ cat /tmp/test.txt
Line 1
Line 2 has leading space
Line 3 followed by blank line
Line 5 (follows a blank line) and has trailing space
Line 6 has no ending CR
There are four elements that will alter the meaning of the file output read by many Bash solutions:
The blank line 4;
Leading or trailing spaces on two lines;
Maintaining the meaning of individual lines (i.e., each line is a record);
The line 6 not terminated with a CR.
If you want the text file line by line including blank lines and terminating lines without
CR, you must use a while loop and you must have an alternate test for the final line.
Here are the methods that may change the file (in comparison to what cat returns):
1) Lose the last line and leading and trailing spaces:
$ while read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'
(If you do while IFS= read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt
instead, you preserve the leading and trailing spaces but still lose the last line if it is not
terminated with CR)
2) Using command substitution with cat reads the entire file in one gulp
and loses the meaning of individual lines:
$ for p in "$(cat /tmp/test.txt)"; do printf "%s\n" "'$p'"; done
'Line 1
Line 2 has leading space
Line 3 followed by blank line
Line 5 (follows a blank line) and has trailing space
Line 6 has no ending CR'
(If you remove the " from $(cat /tmp/test.txt) you read the file
word by word rather than one gulp. Also probably not what is intended...)
The most robust and simplest way to read a file line-by-line and preserve all spacing is:
$ while IFS= read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
' Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space '
'Line 6 has no ending CR'
If you want to strip leading and trading spaces, remove the IFS= part:
$ while read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'
'Line 6 has no ending CR'
(A text file without a terminating \n , while fairly common, is considered broken
under POSIX. If you can count on the trailing \n you do not need || [[ -n $line
]] in the while loop.)
Here is my real-life example of how to loop over the lines of another program's output, check for
substrings, drop double quotes from a variable, and use that variable outside of the loop. I guess
quite a few people end up asking these questions sooner or later.
##Parse FPS from first video stream, drop quotes from fps variable
## streams.stream.0.codec_type="video"
## streams.stream.0.r_frame_rate="24000/1001"
## streams.stream.0.avg_frame_rate="24000/1001"
FPS=unknown
while read -r line; do
if [[ $FPS == "unknown" ]] && [[ $line == *".codec_type=\"video\""* ]]; then
echo ParseFPS $line
FPS=parse
fi
if [[ $FPS == "parse" ]] && [[ $line == *".r_frame_rate="* ]]; then
echo ParseFPS $line
FPS=${line##*=}
FPS="${FPS%\"}"
FPS="${FPS#\"}"
fi
done <<< "$(ffprobe -v quiet -print_format flat -show_format -show_streams -i "$input")"
if [ "$FPS" == "unknown" ] || [ "$FPS" == "parse" ]; then
echo ParseFPS Unknown frame rate
fi
echo Found $FPS
Declaring the variable outside of the loop, setting its value inside it, and then using it after the loop requires the done <<< "$(...)" syntax: the loop runs in the current shell rather than in a subshell, so the variable survives, and the quotes around the command substitution preserve the newlines of the output stream.
The loop matches for substrings, then reads the name=value pair, splits off the part to the right of the last = character, drops the first quote, drops the last quote, and we have a clean value to be used elsewhere.
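A stripped-down sketch of why the done <<< "$(...)" form matters: piping into while runs the loop body in a subshell, so assignments made inside it vanish, whereas the here-string keeps the loop in the current shell:
FPS=unknown
printf 'x\n' | while read -r line; do FPS=changed; done
echo "$FPS"    # still "unknown": the pipeline ran the loop in a subshell
FPS=unknown
while read -r line; do FPS=changed; done <<< "$(printf 'x\n')"
echo "$FPS"    # now "changed": the loop ran in the current shell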
Option 1a: While loop: Single line at a time: Input redirection
#!/bin/bash
filename='peptides.txt'
echo Start
while read p; do
echo $p
done < $filename
Option 1b: While loop: Single line at a time:
Open the file, read from a file descriptor (in this case file descriptor #4).
#!/bin/bash
filename='peptides.txt'
exec 4<$filename
echo Start
while read -u4 p ; do
echo $p
done
Option 2: For loop: Read file into single variable and parse.
This syntax will parse "lines" based on any whitespace between the tokens. This still works here because the given input file's lines are single-word tokens. If there were more than one token per line, this method would not work as well (see the demonstration below). Also, reading the full file into a single variable is
not a good strategy for large files.
#!/bin/bash
filename='peptides.txt'
filelines=`cat $filename`
echo Start
for line in $filelines ; do
echo $line
done
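A tiny demonstration of that limitation, using a throwaway file: with more than one token per line, the for loop iterates over words, while a read loop keeps whole lines:
printf 'alpha beta\ngamma\n' > /tmp/tokens.txt
for t in $(cat /tmp/tokens.txt); do echo "[$t]"; done          # three iterations: [alpha] [beta] [gamma]
while IFS= read -r l; do echo "[$l]"; done < /tmp/tokens.txt   # two lines: [alpha beta] [gamma]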
This is no better than other answers, but is one more way to get the job done in a file without
spaces (see comments). I find that I often need one-liners to dig through lists in text files
without the extra step of using separate script files.
for word in $(cat peptides.txt); do echo $word; done
This format allows me to put it all in one command-line. Change the "echo $word" portion to
whatever you want and you can issue multiple commands separated by semicolons. The following example
uses the file's contents as arguments into two other scripts you may have written.
for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done
Or if you intend to use this like a stream editor (learn sed) you can dump the output to another
file as follows.
for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done > outfile.txt
I've used these as written above because my text files have one word per line (see comments). If you have spaces that you don't want splitting your words/lines,
it gets a little uglier, but the same command still works as follows:
OLDIFS=$IFS; IFS=$'\n'; for line in $(cat peptides.txt); do cmd_a.sh $line; cmd_b.py $line; done > outfile.txt; IFS=$OLDIFS
This just tells the shell to split on newlines only, not spaces, then returns the environment
back to what it was previously. At this point, you may want to consider putting it all into a
shell script rather than squeezing it all into a single line, though.
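A hedged sketch of that script form (the script name and argument handling are illustrative, and it uses the safer while read loop instead of juggling IFS; cmd_a.sh and cmd_b.py stand in for your own scripts as above):
#!/bin/bash
# usage: ./run-per-line.sh peptides.txt
infile=${1:?usage: $0 FILE}
while IFS= read -r line; do
    cmd_a.sh "$line"
    cmd_b.py "$line"
done < "$infile" > outfile.txt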
This booklet is designed to help with common tasks on a Linux system. It is presented as a series of "recipes", each consisting of a plain-English one-line description followed by the Linux command which carries out the task.
The document is focused on performing tasks in Linux using the 'command line' or 'console'.
The format of the booklet was largely inspired by the "Linux Cookbook"
www.dsl.org/cookbook
#!/bin/bash
# Script to backup the /etc hierarchy
#
# Written 4/2002 by Wayne Pollock, Tampa Florida USA
#
# $Id: backup-etc,v 1.6 2004/08/25 01:42:26 wpollock Exp $
#
# $Log: backup-etc,v $
# Revision 1.6 2004/08/25 01:42:26 wpollock
# Changed backup name to include the hostname and 4 digit years.
#
# Revision 1.5 2004/01/07 18:07:33 wpollock
# Fixed dots routine to count files first, then calculate files per dot.
#
# Revision 1.4 2003/04/03 08:10:12 wpollock
# Changed how the version number is obtained, so the file
# can be checked out normally.
#
# Revision 1.3 2003/04/03 08:01:25 wpollock
# Added ultra-fancy dots function for verbose mode.
#
# Revision 1.2 2003/04/01 15:03:33 wpollock
# Eliminated the use of find, and discovered that tar was working
# as intended all along! (Each directory that find found was
# recursively backed-up, so for example /etc, then /etc/mail,
# caused /etc/mail/sendmail.mc to be backuped three times.)
#
# Revision 1.1 2003/03/23 18:57:29 wpollock
# Modified by Wayne Pollock:
#
# Discovered not all files were being backed up, so
# added "-print0 --force-local" to find and "--null -T -"
# to tar (eliminating xargs), to fix the problem when filenames
# contain metacharacters such as whitespace.
# Although this now seems to work, the current version of tar
# seems to have a bug causing it to backup every file two or
# three times when using these options! This is still better
# than not backing up some files at all.)
#
# Changed the logger level from "warning" to "error".
#
# Added '-v, --verbose' options to display dots every 60 files,
# just to give feedback to a user.
#
# Added '-V, --version' and '-h, --help' options.
#
# Removed the lock file mechanism and backup file renaming
# (from foo to foo.1), in favor of just including a time-stamp
# of the form "yymmdd-hhmm" to the filename.
#
#
PATH=/bin:/usr/bin
# The backups should probably be stored in /var someplace:
REPOSITORY=/root
TIMESTAMP=$(date '+%Y%m%d-%H%M')
HOSTNAME=$(hostname)
FILE="$REPOSITORY/$HOSTNAME-etc-full-backup-$TIMESTAMP.tgz"
ERRMSGS=/tmp/backup-etc.$$
PROG=${0##*/}
VERSION=$(echo $Revision: 1.6 $ |awk '{print$2}')
VERBOSE=off
usage()
{ echo "This script creates a full backup of /etc via tar in $REPOSITORY."
echo "Usage: $PROG [OPTIONS]"
echo ' Options:'
echo ' -v, --verbose displays some feedback (dots) during backup'
echo ' -h, --help displays this message'
echo ' -V, --version display program version and author info'
echo
}
dots()
{ MAX_DOTS=50
NUM_FILES=`find /etc|wc -l`
let 'FILES_PER_DOT = NUM_FILES / MAX_DOTS'
bold=`tput smso`
norm=`tput rmso`
tput sc
tput civis
echo -n "$bold(00%)$norm"
while read; do
let "cnt = (cnt + 1) % FILES_PER_DOT"
if [ "$cnt" -eq 0 ]
then
let '++num_dots'
let 'percent = (100 * num_dots) / MAX_DOTS'
[ "$percent" -gt "100" ] && percent=100
tput rc
printf "$bold(%02d%%)$norm" "$percent"
tput smir
echo -n "."
tput rmir
fi
done
tput cnorm
echo
}
# Command line argument processing:
while [ $# -gt 0 ]
do
case "$1" in
-v|--verbose) VERBOSE=on; ;;
-h|--help) usage; exit 0; ;;
-V|--version) echo -n "$PROG version $VERSION "
echo 'Written by Wayne Pollock '
exit 0; ;;
*) usage; exit 1; ;;
esac
shift
done
trap "rm -f $ERRMSGS" EXIT
cd /etc
# create backup, saving any error messages:
if [ "$VERBOSE" != "on" ]
then
tar -cz --force-local -f $FILE . 2> $ERRMSGS
else
tar -czv --force-local -f $FILE . 2> $ERRMSGS | dots
fi
# Log any error messages produced:
if [ -s "$ERRMSGS" ]
then logger -p user.error -t $PROG "$(cat $ERRMSGS)"
else logger -t $PROG "Completed full backup of /etc"
fi
exit 0
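As a usage sketch (assuming the script is saved as backup-etc, is executable, and is run as root), a run followed by a quick sanity check of the resulting archive might look like:
./backup-etc -v
ls -lh /root/*-etc-full-backup-*.tgz
# list the newest archive's contents; the name pattern follows the script's FILE variable
tar -tzf "$(ls -t /root/*-etc-full-backup-*.tgz | head -1)" | head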
And now for an example of buzzword-infused nonsense: "DevOps is a concept, an idea, a life philosophy," says Gottfried Sehringer, chief marketing officer at XebiaLabs, a software delivery automation company. "It's not really a process or a toolset, or a technology." And another one: "In an ideal world, you would push a button to release every few seconds," Sehringer says. "But this is not an ideal world, and so people plug up the process along the way."
I would like to see the sizable software product that gets a release every few seconds. Even for a small and rapidly evolving web site, scripts should be released no more frequently than daily.
Notable quotes:
"... Even if you're a deity of software bug-squashing, how can you -- or any developer or operations specialist -- deliver high-quality, "don't break anything" code when you have to build and release that fast? Everyone has their own magic bullet. "Agile -- " cries one crowd. " Continuous build -- " yells another. " Continuous integration -- " cheers a third. ..."
"... Automation has obvious returns on investment. "You can make sure it's good in pre-production and push it immediately to production without breaking anything, and then just repeat, repeat, repeat, over and over again," says Sehringer. ..."
"... In other words, you move delivery through all the steps in a structured, repeatable, automated way to reduce risk and increase the speed of releases and updates. ..."
The quickie guide to continuous delivery in DevOps
In today's world, you have to develop and deliver almost in the same breath. Here's a quick guide to help you figure out which continuous delivery concepts will help you breathe easy, and which are only hot air. Developers are always under pressure to produce more and release software faster, which encourages the adoption of new concepts and tools. But confusing buzzwords obfuscate real technology and business benefits, particularly when a vendor has something to sell. That makes it hard to determine what works best -- for real, not just as a marketing phrase -- in the continuous flow of build and deliver processes. This article gives you the basics of continuous delivery to help you sort it all out.
To start with, the terms apply to different parts of the same production arc, each of which
are automated to different degrees:
Continuous integration means frequently merging code into a central repository. "Frequently" means usually several times a day. Each merge triggers an automated "build and test" instance, a process sometimes called continuous build. But by either name, continuous integration and continuous build do nothing in terms of delivery or deployment. They're about code management, not what happens to the code afterward.
Continuous delivery refers to the automation of the software release process, which includes some hands-on effort by developers. Usually, developers approve or initiate the deployment, though there can be other manual steps as well.
Continuous deployment is continuous delivery with no manual steps for developers. The
whole thing is automated, and it requires not so much as a nod from humans.
With continuous deployment, "a developer's job typically ends at reviewing a pull request from a teammate and merging it to the master branch," explains Marko Anastasov in a blog post. "A continuous integration/continuous deployment service takes over from there by running all tests and deploying the code to production, while keeping the team informed about [the] outcome of every important event."
However, knowing the terms and their definitions isn't enough to help you determine when and
where it is best to use each. Because, of course, every shop is different.
It would be great if the market clearly distinguished between concepts and tools and their
uses, as they do with terms like DevOps. Oh, wait.
"DevOps is a concept, an idea, a life philosophy," says Gottfried Sehringer, chief marketing
officer at
XebiaLabs
, a software delivery
automation company. "It's not really a process or a toolset, or a technology."
But, alas, industry terms are rarely spelled out that succinctly. Nor are they followed with
hints and tips on how and when to use them. Hence this guide, which aims to help you learn when
to use what.
Choose your accelerator according to your need for speed
That's not the end of it; some businesses push for software updates to be faster still. "If
you work for Amazon, it might be every few seconds," says Sehringer.
Even if you're a deity of software bug-squashing, how can you -- or any developer or operations specialist -- deliver high-quality, "don't break anything" code when you have to build and release that fast? Everyone has their own magic bullet. "Agile --" cries one crowd. "Continuous build --" yells another. "Continuous integration --" cheers a third.
Let's just cut to the chase on all that, shall we?
"Just think of continuous as 'automated,'" says Nate Berent-Spillson, senior delivery
director at
Nexient
, a software
services provider. "Automation is driving down cost and the time to develop and deploy."
Well, frack, why don't people just say automation?
Add to the idea of automation the concepts of continuous build, continuous delivery,
continuous everything, which are central to DevOps, and we find ourselves talking in circles.
So, let's get right to sorting all that out.
... ... ...
Rinse. Repeat, repeat, repeat, repeat (the point of automation in DevOps)
Automation has obvious returns on investment. "You can make sure it's good in pre-production and push it immediately to production without breaking anything, and then just repeat, repeat, repeat, over and over again," says Sehringer.
In other words, you move delivery through all the steps in a structured, repeatable,
automated way to reduce risk and increase the speed of releases and updates.
"In an ideal world, you would push a button to release every few seconds," Sehringer says. But this is not an ideal world, and so people plug up the process along the way.
A company may need approval for an application change from its legal department. "Some
companies are heavily regulated and may need additional gates to ensure compliance," notes
Sehringer. "It's important to understand where these bottlenecks are." The ARA software should
improve efficiencies and ensure the application is released or updated on schedule.
"Developers are more familiar with continuous integration," he says. "Application release
automation is more recent and thus less understood."
... ... ...
Pam Baker has written hundreds of articles published in leading technology, business and
finance publications including InformationWeek, Institutional Investor magazine, CIO.com,
NetworkWorld, ComputerWorld, IT World, Linux World, and more. She has also authored several
analytical studies on technology, eight books -- the latest of which is Data Divination: Big
Data Strategies -- and an award-winning documentary on paper-making. She is a member of the
National Press Club, Society of Professional Journalists and the Internet Press Guild.
I have one older ubuntu server, and one newer debian server and I am migrating data from the old
one to the new one. I want to use rsync to transfer data across to make final migration easier and
quicker than the equivalent tar/scp/untar process.
As an example, I want to sync the home folders one at a time to the new server. This requires
root access at both ends as not all files at the source side are world readable and the destination
has to be written with correct permissions into /home. I can't figure out how to give rsync root
access on both sides.
I've seen a few related questions, but none quite match what I'm trying to do.
Actually you do NOT need to allow root authentication via SSH to run rsync as Antoine suggests.
The transport and system authentication can be done entirely over user accounts as long as
you can run rsync with sudo on both ends for reading and writing the files.
As a user on your destination server you can suck the data from your source server like
this:
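The concrete command is not shown above; a minimal sketch of the kind of pull it describes, assuming boron is the source host and that the flags and paths are illustrative rather than the answer's exact ones, would be:
sudo rsync -a -e ssh --rsync-path='sudo rsync' user@boron:/home/ /home/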
The user you run as on both servers will need passwordless* sudo access to the rsync binary,
but you do NOT need to enable ssh login as root anywhere. If the user you are using doesn't
match on the other end, you can add user@boron: to specify a different remote user.
Good luck.
*or you will need to have entered the password manually inside the timeout window.
Although this is an old question I'd like to add word of CAUTION to this
accepted answer. From my understanding allowing passwordless "sudo rsync" is equivalent
to open the root account to remote login. This is because with this it is very easy
to gain full root access, e.g. because all system files can be downloaded, modified
and replaced without a password. –
Ascurion
Jan 8 '16 at 16:30
Good point. In a trusted environment, you'll pick up a lot of speed by not encrypting.
It might not matter on small files, but with GBs of data it will. –
pboin
May 18 '10 at 10:53
How do I use the rsync tool to copy only the hidden files and directories (such as ~/.ssh/, ~/.foo, and so on) from the /home/jobs directory to the /mnt/usb directory under a Unix-like operating system?
The rsync program is used for synchronizing files over a network or local disks. To view or display
only hidden files with ls command:
ls -ld ~/.??*
OR
ls -ld ~/.[^.]*
Sample outputs:
Fig:01 ls command to view only hidden files
rsync not synchronizing all hidden .dot files?
In this example, you used the pattern .[^.]* or .??* to
select and display only hidden files using ls command . You can use the same pattern with any
Unix command including rsync command. The syntax is as follows to copy hidden files with rsync:
In this example, copy all hidden files from my home directory to /mnt/test:
rsync -avzP ~/.[^.]* /mnt/test
Sample outputs:
Fig.02 Rsync example to copy only hidden files
Vivek Gite is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating
system/Unix shell scripting. He has worked with global clients and in various industries, including
IT, education, defense and space research, and the nonprofit sector.
"... It's basically a bullshit bingo post where someone repeats a buzzword without any knowledge of the material behind it ..."
"... continuous delivery == constant change ..."
"... This might be good for developers, but it's a nightmare for the poor, bloody, customers. ..."
"... However, I come at it from the other side, the developers just push new development out and production support is responsible for addressing the mess, it is horrible, there is too much disconnect between developers and their resulting output creating consistent outages. The most successful teams follow the mantra "Eat your own dog food" , developers who support the crap they push ..."
"... But do you know who likes Continuous Delivery? Not the users. The users hate stuff changing for the sake of change, but trying to convince management seems an impossible task. ..."
"... some of us terrified what 'continious delivery' means in the context of software in the microcontroller of a health care device, ..."
"... It is a natural consequence of a continuous delivery, emphasis on always evolving and changing and that the developer is king and no one can question developer opinion. Developer decides it should move, it moves. No pesky human testers to stand up and say 'you confused the piss out of us' to make them rethink it. No automatic test is going to capture 'confuse the piss out of the poor non-developer users'. ..."
"... It's amazing how common this attitude has become. It's aggressively anti-customer, and a big part of the reason for the acceleration of the decline of software quality over the past several years. ..."
"... All I know is that, as a user, rapid-release or continuous delivery has been nothing but an enormous pain in the ass to me and I wish it would die the horrible death it deserves already. ..."
Yeah, this is an incredibly low quality article. It doesn't specify what it means by what AI
should do, doesn't specify which type of AI, doesn't specify why AI should be used, etc. Junk
article.
It's basically a bullshit bingo post where someone repeats a buzzword without any knowledge
of the material behind it.
Here is the actual model, a model that will exist for the next 1,000 years.
1. Someone (or something) gathers requirement. 2. They get it wrong. 3. They develop the wrong
thing that doesn't even work they way they thought it should 4. The project leader is canned 5.
The software is implemented by an outside vendor, with all the flaws. 6. The software operates
finally after 5 years of modifications to both the software and the workflows (to match the flaws
in the software). 7. As soon as it's all running properly and everyone is trained, a new project
is launched to redo it, "the right way". 8. Goto 1
Here is the actual model, a model that will exist for the next 1,000 years.
1. Someone (or something) gathers requirement. 2. They get it wrong. 3. They develop
the wrong thing that doesn't even work they way they thought it should 4. The project leader
is canned 5. The software is implemented by an outside vendor, with all the flaws. 6. The software
operates finally after 5 years of modifications to both the software and the workflows (to
match the flaws in the software). 7. As soon as it's all running properly and everyone is trained,
a new project is launched to redo it, "the right way". 8. Goto 1
You just accurately described a 6 year project within our organization....and it made me cry
Does this model have a name? an urban dictionary name? if not it needs one.
Yeah, maybe there's something useful in TFA, but I'm not really inclined to go looking based
on what was in the summary. At no point, did the person being quoted actually say anything of
substance.
It's just buzzword soup with a dash of new technologies thrown in.
Five years ago they would have said practically the same words, but just talked about
utilizing the cloud instead of AI.
I'm also a little skeptical of any study published by a company looking to sell you what the
study has just claimed to be great. That doesn't mean its a complete sham, but how hard did they
look for other explanations why some companies are more successful than others?
I notice the targets are all set from the company's point of view... including customer satisfaction.
However it's quite easy to meet any goal, as long as you set it low enough.
Companies like Comcast or Qwest objectively have abysmal customer satisfaction ratings; but
they likely meet their internal goal for that metric. I notice, in their public communications,
they always use phrasing along the lines of "giving you an even better customer service experience"
- again, the trick is to set the target low and
This might be good for developers, but it's a nightmare for the poor, bloody, customers.
Any professional outfit will test a new release (in-house or commercial product) thoroughly
before letting it get anywhere close to an environment where their business is at stake.
This process can take anywhere from a day or two to several months, depending on the complexity
of the operation, the scope of the changes, HOW MANY (developers note: not if any ) bugs
are found and whether any alterations to working practices have to be introduced.
So to have developers lob a new "release" over the wall at frequent intervals is not useful,
it isn't clever, nor does it save (the users) any money or speed up their acceptance. It just
costs more in integration testing, floods the change control process with "issues" and means that
when you report (again, developers: not if ) problems, it is virtually impossible to describe
exactly which release you are referring to and even more impossible for whoever fixes the bugs
to produce the same version to fix and then incorporate those fixes into whatever happens to be
the latest version - that hour. Even more so when dozens of major corporate customers are ALL
reporting bugs with each new version they test.
Any professional outfit will test a new release (in-house or commercial product) thoroughly
before letting it get anywhere close to an environment where their business is at stake. This
process can take anywhere from a day or two to several months, depending on the complexity of
the operation, the scope of the changes, HOW MANY (developers note: not if any) bugs are found
and whether any alterations to working practices have to be introduced.
I wanted to chime in with a tangible anecdote to support your
I can sympathize with that view, of it appearing to have too many developers focused upon deployment/testing rather than actual development.
However, I come at it from the other side, the developers just push new development out
and production support is responsible for addressing the mess, it is horrible, there is too much
disconnect between developers and their resulting output creating consistent outages. The most
successful teams follow the mantra "Eat your own dog food" , developers who support the crap they
push
But do you know who likes Continuous Delivery? Not the users. The users hate stuff changing
for the sake of change, but trying to convince management seems an impossible task.
Why should users not like it? If you shop on amazon you don't know if a specific feature you
notice today came there via continuous delivery or a more traditional process.
The crux of the problem is that we (in these discussions and the analysts) describe *all* manner
of 'software development' as the same thing. Whether it's a desktop application, an embedded microcontroller
in industrial equipment, a web application for people to get work done, or a webapp to let people
see the latest funny cat video.
Then we start talking past each other, some of us terrified what 'continious delivery'
means in the context of software in the microcontroller of a health care device, others t
Well, 'continuous delivery' is a term with a defined meaning. And releasing phone apps with unwanted UI/functionality in rapid succession is not part of that definition. Continuous delivery is basically just the next logical step after continuous integration. You deploy the new functionality automatically (or with a click of a button) when certain test criteria are met. Usually on a subset of your nodes, so only a subset of your customers sees it. If you have crashes on those nodes or customer complaints you
You deploy the new functionality automatically (or with a click of a button) when certain test criteria are met. Usually on a subset of your nodes, so only a subset of your customers sees it. If you have crashes on those nodes or customer complaints you roll back.
Why do you consider this to be a good thing? It's certainly not for those poor customers who
were chosen to be involuntary beta testers, and it's also not for the rest of the customers who
have to deal with software that is constantly changing underneath them.
'continuous delivery' is a term with a defined meaning. And releasing phone apps with unwanted UI/functionality in rapid succession is not part of that definition.
It is a natural consequence of a continuous delivery, emphasis on always evolving and changing
and that the developer is king and no one can question developer opinion. Developer decides it
should move, it moves. No pesky human testers to stand up and say 'you confused the piss out of
us' to make them rethink it. No automatic test is going to capture 'confuse the piss out of the
poor non-developer users'.
If you have crashes on those nodes or customer complaints you roll back.
Note that a customer with a choice is likely to just go somewhere else rather than use your
software.
IT in my company does network, Windows, Office and Virus etc. type of work. Is this what they
talk about? Anyway, it's been long outsourced to IT (as in "Indian" technology)...
I recently interviewed at a couple of the new fangled big data marketing startups that correlate
piles of stuff to help target ads better, and they were continuously deploying up the wazoo. In
fact, they had something like zero people doing traditional QA.
It was not totally insane at all. But they did have a blasé attitude about deployments -- if stuff don't work in production they just roll back, and not worry about customer input data being dropped on the floor. Heck, they did not worry much about da
But they did have a blasé attitude about deployments -- if stuff don't work in production they just roll back, and not worry about customer input data being dropped on the floor.
It's amazing how common this attitude has become. It's aggressively anti-customer, and
a big part of the reason for the acceleration of the decline of software quality over the past
several years.
You want your deployment system to be predictable, and as my old AI professor used to say,
intelligent means hard to predict. You don't want AI for systems that just have to do the exact
same thing reliably over and over again.
All I know is that, as a user, rapid-release or continuous delivery has been nothing but
an enormous pain in the ass to me and I wish it would die the horrible death it deserves already.
Using ssh means encryption, which makes things slower. --force only affects directories, if I read the man page correctly. –
Torsten Bronger
Jan 1 '13 at 23:08
Unless you're using ancient kit, the CPU overhead of encrypting / decrypting the traffic shouldn't be noticeable, but you will lose 10-20% of your bandwidth through the encapsulation process. Then again, 80% of a working link is better than 100% of a non-working one :) –
arober11
Jan 2 '13 at 10:52
I do have an "ancient kit". ;-) (Slow ARM CPU on a NAS.) But I now mount the NAS with NFS and use rsync (with "sudo") locally. This solves the problem (and is even faster). However, I still think that my original problem must be solvable using the rsync protocol (remote, no ssh). –
Torsten Bronger
Jan 4 '13 at 7:55
On my Ubuntu server there are about 150 shell accounts. All usernames begin with the prefix
u12.. I have root access and I am trying to copy a directory named "somefiles" to all the
home directories. After copying the directory the user and group ownership of the directory
should be changed to user's. Username, group and home-dir name are same. How can this be
done?
Do the copying as the target user. This will automatically make the target files owned by that user. Make sure that the original files are world-readable (or at least readable by all the target users). Run chmod afterwards if you don't want the copied files to be world-readable.
getent passwd |
awk -F : '$1 ~ /^u12/ {print $1}' |
while IFS= read -r user; do
su "$user" -c 'cp -Rp /original/location/somefiles ~/'
done
I am using rsync to replicate a web folder structure from a local server to a remote server.
Both servers are ubuntu linux. I use the following command, and it works well:
The usernames for the local system and the remote system are different. From what I have
read it may not be possible to preserve all file and folder owners and groups. That is OK,
but I would like to preserve owners and groups just for the www-data user, which does exist
on both servers.
Is this possible? If so, how would I go about doing that?
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
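The exact command is not reproduced here; a hedged sketch of the kind of invocation the rest of this answer describes (the path /var/www/ and the remote user name are illustrative) is:
sudo rsync -az -e ssh --rsync-path='sudo rsync' user@10.1.1.2:/var/www/ /var/www/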
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. See man sudoers or run sudo visudo for instructions and samples.
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
As far as I know, you cannot chown files to a user other than yourself if you are not root. So you would have to rsync using the www-data account, as all files will be created with the specified user as owner, or else chown the files afterwards as root; see the sketch below.
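A minimal sketch of those two approaches (the host name and paths are illustrative, not taken from the question):
# copy as the www-data user, so new files on the remote end are created with that owner
rsync -a /var/www/ www-data@remote:/var/www/
# or copy as an ordinary user and fix ownership afterwards as root on the remote side
ssh root@remote chown -R www-data:www-data /var/www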
The root users for the local system and the remote system are different.
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with [email protected]. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
Also, set up access to [email protected] using a DSA or RSA key, so that you can avoid having
passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh [email protected] as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
I'm trying to use rsync to copy a set of files from one system to another. I'm running the command as a normal user (not root). On the remote system, the files are owned by apache and when copied they are obviously owned by the local account (fred).
My problem is that every time I run the rsync command, all files are re-synched even though they haven't changed. I think the issue is that rsync sees the file owners are different and my local user doesn't have the ability to change ownership to apache, but I'm not including the -a or -o options so I thought this would not be checked. If I run the command as root, the files come over owned by apache and do not come a second time if I run the command again. However I can't run this as root for other reasons. Here is the command:
Why can't you run rsync as root? On the remote system, does fred have read
access to the apache-owned files? –
chrishiestand
May 3 '11 at 0:32
Ah, I left out the fact that there are ssh keys set up so that local fred can
become remote root, so yes fred/root can read them. I know this is a bit convoluted
but its real. –
Fred Snertz
May 3 '11 at 14:50
Always be careful when root can ssh into the machine. But if you have password
and challenge response authentication disabled it's not as bad. –
chrishiestand
May 3 '11 at 17:32
-c, --checksum
This changes the way rsync checks if the files have been changed and are in need of a transfer. Without this option,
rsync uses a "quick check" that (by default) checks if each file's size and time of last modification match between the
sender and receiver. This option changes this to compare a 128-bit checksum for each file that has a matching size.
Generating the checksums means that both sides will expend a lot of disk I/O reading all the data in the files in the
transfer (and this is prior to any reading that will be done to transfer changed files), so this can slow things down
significantly.
The sending side generates its checksums while it is doing the file-system scan that builds the list of the available
files. The receiver generates its checksums when it is scanning for changed files, and will checksum any file that has
the same size as the corresponding sender's file: files with either a changed size or a changed checksum are selected
for transfer.
Note that rsync always verifies that each transferred file was correctly reconstructed on the receiving side by checking
a whole-file checksum that is generated as the file is transferred, but that automatic after-the-transfer verification
has nothing to do with this option's before-the-transfer "Does this file need to be updated?" check.
For protocol 30 and beyond (first supported in 3.0.0), the checksum used is MD5. For older protocols, the checksum used
is MD4.
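As a usage sketch, forcing the checksum comparison on a dry run looks like this (the paths are illustrative):
rsync -rvc --dry-run /data/src/ /data/dst/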
I have a bash script which uses rsync to backup files in Archlinux. I noticed that rsync failed to copy a file from /sys, while cp worked just fine:
# rsync /sys/class/net/enp3s1/address /tmp
rsync: read errors mapping "/sys/class/net/enp3s1/address": No data available (61)
rsync: read errors mapping "/sys/class/net/enp3s1/address": No data available (61)
ERROR: address failed verification -- update discarded.
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1052) [sender=3.0.9]
# cp /sys/class/net/enp3s1/address /tmp ## this works
I wonder why rsync fails, and whether it is possible to copy the file with it?
Rsync has code which specifically checks if a file is truncated during read and gives this error: ENODATA. I don't know why the files in /sys have this behavior, but since they're not real files, I guess it's not too surprising. There doesn't seem to be a way to tell rsync to skip this particular check.
I think you're probably better off not rsyncing /sys and using specific scripts to cherry-pick out the particular information you want (like the network card address).
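A minimal sketch of that cherry-picking idea, assuming all you actually want is each interface's MAC address:
# collect interface MAC addresses instead of rsyncing /sys
for dev in /sys/class/net/*; do
    printf '%s %s\n' "${dev##*/}" "$(cat "$dev/address")"
done > /root/net-addresses.txt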
First off, /sys is a pseudo file system. If you look at /proc/filesystems you will find a list of registered file systems where quite a few have nodev in front. This indicates they are pseudo filesystems. This means they exist on a running kernel as a RAM-based filesystem. Further, they do not require a block device.
Further, you can do a stat on a file and notice another distinct feature: it occupies 0 blocks. Also, the inode of the root (stat /sys) is 1, whereas /stat/fs typically has inode 2, etc.
rsync vs. cp
The easiest explanation for rsync's failure to synchronize pseudo files is perhaps by example.
Say we have a file named address that is 18 bytes. An ls or stat of the file reports 4096 bytes.
rsync
Opens file descriptor, fd.
Uses fstat(fd) to get information such as size.
Sets out to read size bytes, i.e. 4096. That would be line 253 of the code linked by @mattdm. read_size == 4096
Ask; read: 4096 bytes.
A short string is read, i.e. 18 bytes. nread == 18
read_size = read_size - nread (4096 - 18 = 4078)
Ask; read: 4078 bytes
0 bytes read (as first read consumed all bytes in file).
During this process it actually reads the entire file. But with no size available it cannot validate the result – thus failure is the only option.
cp
Opens file descriptor, fd.
Uses fstat(fd) to get information such as st_size (also uses lstat and stat).
Check if file is likely to be sparse. That is the file has holes etc.
copy.c:1010
/* Use a heuristic to determine whether SRC_NAME contains any sparse
* blocks. If the file has fewer blocks than would normally be
* needed for a file of its size, then at least one of the blocks in
* the file is a hole. */
sparse_src = is_probably_sparse (&src_open_sb);
As stat reports the file to have zero blocks, it is categorized as sparse.
Tries to read the file by extent-copy (a more efficient way to copy normal sparse files), and fails.
Copy by sparse-copy.
Starts out with a max read size of MAXINT, typically 18446744073709551615 bytes on a 64-bit system.
Ask; read 4096 bytes. (Buffer size allocated in memory from stat information.)
A short string is read, i.e. 18 bytes.
Check if a hole is needed; nope.
Write buffer to target.
Subtract 18 from max read size.
Ask; read 4096 bytes.
0 bytes as all got consumed in first read.
Return success.
All OK. Update flags for file.
FINE.
Might be related, but extended attribute calls will fail on sysfs:
[root@hypervisor eth0]# lsattr address
lsattr: Inappropriate ioctl for device While reading flags on address
[root@hypervisor eth0]#
Looking at my strace it looks like rsync tries to pull in extended attributes by
default:
22964 <... getxattr resumed> , 0x7fff42845110, 132) = -1 ENODATA (No data
available)
I tried finding a flag to give rsync to see if skipping extended attributes resolves the issue but wasn't able to find anything (--xattrs turns them on at the destination).
I'm having some trouble with rsync. I'm trying to sync my local /etc directory to a remote
server, but this won't work.
The problem is that it seems it doesn't copy all the files. The local /etc dir contains 15MB of data; after an rsync, the remote backup contains only 4.6MB of data.
Scormen May 31st, 2009, 11:05 AM I found that if I do a local sync, everything goes fine.
But if I do a remote sync, it copies only 4.6MB.
Any idea?
LoneWolfJack May 31st, 2009, 05:14 PM never used rsync on a remote machine, but "sudo rsync"
looks wrong. you probably can't call sudo like that so the ssh connection needs to have the
proper privileges for executing rsync.
just an educated guess, though.
Scormen May 31st, 2009, 05:24 PM Thanks for your answer.
In /etc/sudoers I have added next line, so "sudo rsync" will work.
kris ALL=NOPASSWD: /usr/bin/rsync
I also tried without --rsync-path="sudo rsync", but without success.
I have also tried on the server to pull the files from the laptop, but that doesn't work
either.
LoneWolfJack May 31st, 2009, 05:30 PM in the rsync help file it says that --rsync-path is for
the path to rsync on the remote machine, so my guess is that you can't use sudo there as it
will be interpreted as a path.
so you will have to do --rsync-path="/path/to/rsync" and make sure the ssh login has root
privileges if you need them to access the files you want to sync.
--rsync-path="sudo rsync" probably fails because
a) sudo is interpreted as a path
b) the space isn't escaped
c) sudo probably won't allow itself to be called remotely
again, this is not more than an educated guess.
Scormen May 31st, 2009, 05:45 PM I understand what you mean, so I tried also:
sending incremental file list
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/chatscripts/pap":
Permission denied (13)
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/chatscripts/provider":
Permission denied (13)
rsync: symlink "/home/kris/backup/laptopkris/etc/cups/ssl/server.crt" ->
"/etc/ssl/certs/ssl-cert-snakeoil.pem" failed: Permission denied (13)
rsync: symlink "/home/kris/backup/laptopkris/etc/cups/ssl/server.key" ->
"/etc/ssl/private/ssl-cert-snakeoil.key" failed: Permission denied (13)
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/ppp/peers/provider":
Permission denied (13)
rsync: recv_generator: failed to stat
"/home/kris/backup/laptopkris/etc/ssl/private/ssl-cert-snakeoil.key": Permission denied
(13)
sent 86.85K bytes received 306 bytes 174.31K bytes/sec
total size is 8.71M speedup is 99.97
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at
main.c(1058) [sender=3.0.5]
And the same command with "root" instead of "kris".
Then, I get no errors, but I still don't have all the files synced.
Scormen June 1st, 2009, 09:00 AM Sorry for this bump.
I'm still having the same problem.
Any idea?
Thanks.
binary10 June 1st, 2009, 10:36 AM I understand what you mean, so I tried also:
And the same command with "root" instead of "kris".
Then, I get no errors, but I still don't have all the files synced.
Maybe there's a nicer way but you could place /usr/bin/rsync into a private protected area
and set the owner to root place the sticky bit on it and change your rsync-path argument such
like:
# on the remote side, aka [email protected]
mkdir priv-area
# protect it from normal users running a priv version of rsync
chmod 700 priv-area
cd priv-area
cp -p /usr/local/bin/rsync ./rsync-priv
sudo chown 0:0 ./rsync-priv
sudo chmod +s ./rsync-priv
ls -ltra # rsync-priv should now be 'bold-red' in bash
Looking at your flags, you've specified a cvs ignore factor, ignore files that are updated
on the target, and you're specifying a backup of removed files.
From those qualifiers you're not going to be getting everything sync'd. It's doing what
you're telling it to do.
If you really wanted to perform a like for like backup.. (not keeping stuff that's been
changed/deleted from the source. I'd go for something like the following.
Remove the --dry-run and -i when you're happy with the output, and it should do what you
want. A word of warning, I get a bit nervous when not seeing trailing (/) on directories as it
could lead to all sorts of funnies if you end up using rsync on softlinks.
Scormen June 1st, 2009, 12:19 PM Thanks for your help, binary10.
I've tried what you have said, but still, I only receive 4.6MB on the remote server.
Thanks for the warning, I'll note that!
Did someone already tried to rsync their own /etc to a remote system? Just to know if this
strange thing only happens to me...
Thanks.
binary10 June 1st, 2009, 01:22 PM Thanks for your help, binary10.
I've tried what you have said, but still, I only receive 4.6MB on the remote server.
Thanks for the warning, I'll note that!
Did someone already tried to rsync their own /etc to a remote system? Just to know if this
strange thing only happens to me...
Thanks.
Ok so I've gone back and looked at your original post, how are you calculating 15MB of data
under etc - via a du -hsx /etc/ ??
I do daily drive to drive backup copies via rsync and drive to network copies.. and have
used them recently for restoring.
Sure my du -hsx /etc/ reports 17MB of data of which 10MB gets transferred via an rsync. My
backup drives still operate.
rsync 3.0.6 has some fixes to do with ACLs and special devices rsyncing between solaris. but
I think 3.0.5 is still ok with ubuntu to ubuntu systems.
Here is my test doing exactly what you you're probably trying to do. I even check the remote
end..
Number of files: 3121
Number of files transferred: 1812
Total file size: 10.04M bytes
Total transferred file size: 10.00M bytes
Literal data: 10.00M bytes
Matched data: 0 bytes
File list size: 109.26K
File list generation time: 0.002 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 10.20M
Total bytes received: 38.70K
sent 10.20M bytes received 38.70K bytes 4.09M bytes/sec
total size is 10.04M speedup is 0.98
binary10@jsecx25:~/bin-priv$ sudo du -hsx /etc/
17M /etc/
binary10@jsecx25:~/bin-priv$
And then on the remote system I do the du -hsx
binary10@lenovo-n200:/home/kris/backup/laptopkris/etc$ cd ..
binary10@lenovo-n200:/home/kris/backup/laptopkris$ sudo du -hsx etc
17M etc
binary10@lenovo-n200:/home/kris/backup/laptopkris$
Scormen June 1st, 2009, 01:35 PM How are you calculating 15MB of data under etc - via a du -hsx /etc/ ??
Indeed, on my laptop I see:
root@laptopkris:/home/kris# du -sh /etc/
15M /etc/
If I do the same thing after a fresh sync to the server, I see:
root@server:/home/kris# du -sh /home/kris/backup/laptopkris/etc/
4.6M /home/kris/backup/laptopkris/etc/
On both sides, I have installed Ubuntu 9.04, with version 3.0.5 of rsync.
So strange...
binary10 June 1st, 2009, 01:45 PM it does seem a bit odd.
I'd start doing a few diffs from the outputs find etc/ -printf "%f %s %p %Y\n" | sort
And see what type of files are missing.
- edit - Added the %Y file type.
Scormen June 1st, 2009, 01:58 PM Hmm, it's going stranger.
Now I see that I have all my files on the server, but they don't have their full size (bytes).
I have uploaded the files, so you can look into them.
binary10 June 1st, 2009, 02:16 PM If you look at the files that are different aka the ssl's
they are links to local files else where aka linked to /usr and not within /etc/
aka they are different on your laptop and the server
Scormen June 1st, 2009, 02:25 PM I understand that soft links are just copied, and not the
"full file".
But, you have run the same command to test, a few posts ago.
How is it possible that you can see the full 15MB?
binary10 June 1st, 2009, 02:34 PM I was starting to think that this was a bug with du.
The de-referencing is a bit topsy.
If you rsync copy the remote backup back to a new location back onto the laptop and do the
du command. I wonder if you'll end up with 15MB again.
Scormen June 1st, 2009, 03:20 PM Good tip.
On the server side, the backup of the /etc was still 4.6MB.
I have rsynced it back to the laptop, to a new directory.
If I go on the laptop to that new directory and do a du, it says 15MB.
binary10 June 1st, 2009, 03:34 PM Good tip.
On the server side, the backup of the /etc was still 4.6MB.
I have rsynced it back to the laptop, to a new directory.
If I go on the laptop to that new directory and do a du, it says 15MB.
I think you've now confirmed that RSYNC DOES copy everything.. it's just that du is confusing what you had expected by counting the end link sizes.
It might also think about what you're copying, maybe you need more than just /etc of course
it depends on what you are trying to do with the backup :)
enjoy.
Scormen June 1st, 2009, 03:37 PM Yeah, it seems to work well.
So, the "problem" where just the soft links, that couldn't be counted on the server side?
binary10 June 1st, 2009, 04:23 PM Yeah, it seems to work well.
So, the "problem" where just the soft links, that couldn't be counted on the server side?
The links were copied as links, as per the design of --archive in rsync.
The contents the links point to are different between your two systems: the targets reside outside of /etc/, in /usr, and so du reports them differently.
Scormen June 1st, 2009, 05:36 PM Okay, I got it.
Many thanks for the support, binary10!
Scormen June 1st, 2009, 05:59 PM Just to know, is it possible to copy the data from these links
as real, hard data?
Thanks.
binary10 June 2nd, 2009, 09:54 AM Just to know, is it possible to copy the data from these
links as real, hard data?
Thanks.
Yep absolutely
You should then look at other possibilities of:
-L, --copy-links transform symlink into referent file/dir
--copy-unsafe-links only "unsafe" symlinks are transformed
--safe-links ignore symlinks that point outside the source tree
-k, --copy-dirlinks transform symlink to a dir into referent dir
-K, --keep-dirlinks treat symlinked dir on receiver as dir
but then you'll have to start questioning why you are backing them up like that especially
stuff under /etc/. If you ever wanted to restore it you'd be restoring full files and not
symlinks the restore result could be a nightmare as well as create future issues (upgrades etc)
let alone your backup will be significantly larger, could be 150MB instead of 4MB.
Scormen June 2nd, 2009, 10:04 AM Okay, now I'm sure what it's doing :)
Is it also possible to show on a system the "real disk usage" of e.g. that /etc directory? So,
without the links, that we get a output of 4.6MB.
Thank you very much for your help!
binary10 June 2nd, 2009, 10:22 AM What does the following respond with.
sudo du --apparent-size -hsx /etc
If you want the real answer then your result from a dry-run rsync will only be enough for
you.
I read
here that the purpose of export in a shell is to make the variable available
to sub-processes started from the shell.
However, I have also read
here and here that
"Processes inherit their environment from their parent (the process which started them)."
If this is the case, why do we need export ? What am I missing?
Are shell variables not part of the environment by default? What is the difference?
Your assumption is that all shell variables are in the environment . This is incorrect.
The export command is what defines a name to be in the environment at all. Thus:
a=1
b=2
export b
results in the current shell knowing that $a expands to 1 and $b
to 2, but subprocesses will not know anything about a because it is not part of the
environment (even in the current shell).
Some useful tools:
set : Useful for viewing the current shell's parameters, exported-or-not
set -k : Sets assigned args in the environment. Consider f() {
set -k; env; }; f a=1
export : Tells the shell to put a name in the environment. Export and assignment
are two entirely different operations.
env : As an external command, env can only tell you about the
inherited environment, thus, it's useful for sanity checking.
env -i : Useful for clearing the environment before starting a subprocess.
Alternatives to export :
name=val command # Assignment before command exports that name to the command.
declare/local -x name # Exports name, particularly useful in shell functions
when you want to avoid exposing the name to outside scope.
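A small sketch of those alternatives (the script and variable names are hypothetical):
# Assignment directly before a command exports the name to that command only
DEBUG=1 ./myscript.sh         # hypothetical script; it sees DEBUG=1 in its environment
# declare -x inside a function exports the (function-local) name to child processes
myfunc() { declare -x MYTMP=/tmp; env | grep MYTMP; }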
====
There's a difference between shell variables and environment variables. If you define a shell
variable without export ing it, it is not added to the processes environment and thus
not inherited to its children.
Using export you tell the shell to add the shell variable to the environment. You
can test this using printenv (which just prints its environment to stdout, since it's a child-process you see the effect of export ing variables):
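For example (MYVAR is a made-up name):
MYVAR=hello          # shell variable only
printenv MYVAR       # prints nothing
export MYVAR
printenv MYVAR       # now prints: hello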
I am using startx to start the graphical environment. I have a very simple
.xinitrc which I will add things to as I set up the environment, but for now it
is as follows:
catwm &    # Just a basic window manager, for testing.
xterm
The reason I background the WM and foreground the terminal, and not the other way around as is often done, is that I would like to be able to come back to the virtual text console after typing exit in xterm . This appears to work as described.
The problem is that the PS1 variable that currently is set to my preference
in /etc/profile.d/user.sh (which is sourced from /etc/profile supplied
by distro), does not appear to propagate to the environment of the xterm mentioned
above. The relevant process tree is as follows:
\_ bash
    \_ xinit /home/user/.xinitrc -- /etc/X11/xinit/xserverrc -auth /tmp/serverauth.ggJna3I0vx
        \_ /usr/bin/X -nolisten tcp -auth /tmp/serverauth.ggJna3I0vx vt1
        \_ sh /home/user/.xinitrc
            \_ /home/user/catwm
            \_ xterm
                \_ bash
The shell started by xterm appears to be interactive, the shell executing
.xinitrc however is not. I am ok with both, the assumptions about interactivity
seem to be perfectly valid, but now I have a non-interactive shell that spawns an interactive
shell indirectly, and the interactive shell has no chance to automatically inherit the prompt,
because the prompt was unset or otherwise made unavailable higher up the process tree.
Commands env and export list only variables which are exported.
$PS1 is usually not exported. Try echo $PS1 in your shell to see the actual value of $PS1 .
Non-interactive shells usually do not have $PS1 . Non-interactive bash
explicitly unsets $PS1 . 1
You can check if bash is interactive by echo $- . If the output contains
i then it is interactive. You can explicitly start interactive shell by using
the option on the command line: bash -i . Shell started with -c is
not interactive.
The /etc/profile script is read for a login shell. You can start the shell
as a login shell by: bash -l .
With bash shell the scripts /etc/bash.bashrc and ~/.bashrc
are usually used to set $PS1 . Those scripts are sourced when interactive non-login
shell is started. It is your case in the xterm .
Start the shell inside xterm as a login shell bash -l . Check
if /etc/profile and ~/.profile do not contain code which should
be executed only after login. Maybe slight modifications of the scripts will be needed.
Use a different shell. For example dash does not unset $PS1
. You can use such a shell just as the non-interactive shell which will run the scripts
up to xterm .
Give up the strict POSIX compliance and use the bash-standard place for setting
$PS1 : /etc/bash.bashrc or ~/.bashrc .
Give up the strict POSIX compliance and source your own startup script like: bash
--rcfile <(echo "PS1=$PS1save") -i
Start the intermediate shells from startx till xterm as interactive
shells ( bash -i ). Unfortunately this can have some side effects and I would not do this.
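For the first option, a minimal sketch of how the xterm line in the .xinitrc above could be changed (an illustration, not from the original thread):
catwm &
xterm -e bash -l    # start a login shell inside xterm so /etc/profile gets read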
I am specifically avoiding to set PS1 in .bashrc or
/etc/bash.bashrc (which is executed as well), to retain POSIX shell compatibility.
These do not set or unset PS1 . PS1 is set in /etc/profile.d/user.sh
, which is sourced by /etc/profile . Indeed, this file is only executed
for login shells, however I do export PS1 from /etc/profile.d/user.sh
exactly because I want propagation of my preferred value down the process tree. So
it shouldn't matter which subshells are login and/or interactive ones then, should
it? – amn
Oct 21 '13 at 11:32
It seems that bash removes the PS1 variable. What exactly
do you want to achieve by "POSIX shell compatibility"? Do you want to be able to replace
bash by a different POSIX-compliant shell and retain the same functionality?
Based on my tests bash removes PS1 when it is started as
non-interactive. I think of two simple solutions: 1. start the shell as a login
shell with the -l option (attention for actions in the startup scripts
which should be started only at login) 2. start the intermediate shells as
interactive with the -i option. –
pabouk
Oct 21 '13 at 12:00
I try to follow interfaces and specifications, not implementations - hence POSIX
compatibility. That's important (to me). I already have one login shell - the one
started by /usr/bin/login . I understand that a non-interactive shell
doesn't need prompt, but unsetting a variable is too much - I need the prompt in an
interactive shell (spawned and used by xterm ) later on. What am I doing
wrong? I guess most people set their prompt in .bashrc which is sourced
by bash anyway, and so the prompt survives. I try to avoid .bashrc however.
– amn
Oct 22 '13 at 12:12
The Learning Bash Book mentions that a subshell will inherit only environment variables and file descriptors, etc., and that it will not inherit variables that are not exported:
$ var=15
$ (echo $var)
15
$ ./file # this file includes the same command: echo $var
$
As far as I know, the shell will create two subshells, one for the () case and one for ./file. But why, in the () case, did the subshell identify the var variable although it is not exported, while in the ./file case it did not identify it?
...
I tried to use strace to figure out how this happens and, surprisingly, I found that bash uses the same arguments for the clone system call, so both forked processes in () and ./file should have the same process address space as the parent. So why, in the () case, is the variable visible to the subshell, while the same does not happen in the ./file case, although the same arguments are passed to the clone system call?
The subshell created using parentheses does not use an execve()
call for the new process, the calling of the script does. At this point the variables
from the parent shell are handled differently: The execve() passes a deliberate
set of variables (the script-calling case) while not calling execve() (the
parentheses case) leaves the complete set of variables intact.
Your probing using strace should have shown exactly that difference; if you
did not see it, I can only assume that you made one of several possible mistakes. I will just
strip down what I did to show the difference, then you can decide for yourself where your
error was.
The solution to this mystery is that subshells inherit everything from the parent shell, including all shell variables, because they are simply created with fork or clone and so share the same memory image as the parent shell. That's why this will work:
$ var=15
$ (echo $var)
15
But in the ./file case, the fork will later be followed by an exec or execve system call, which clears all the previous parent variables while keeping the environment variables. You can check this out with strace, using -f to monitor the child subshell, and you will find that there is a call to execve.
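A rough way to see that execve call yourself (assuming strace is installed; the test file name is just an example):
printf '#!/bin/bash\necho "var=$var"\n' > file && chmod +x file
var=15
( echo "$var" )      # prints 15: plain fork, shell variables survive
./file               # prints "var=": execve starts a fresh bash without var
strace -f -e trace=execve bash -c './file' 2>&1 | grep execve   # shows the execve for ./file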
When interacting with your server through a shell session, there are many pieces of
information that your shell compiles to determine its behavior and access to resources. Some of
these settings are contained within configuration settings and others are determined by user
input.
One way that the shell keeps track of all of these settings and details is through an area
it maintains called the environment . The environment is an area that the shell builds every
time that it starts a session that contains variables that define system properties.
In this guide, we will discuss how to interact with the environment and read or set
environmental and shell variables interactively and through configuration files. We will be
using an Ubuntu 12.04 VPS as an example, but these details should be relevant on any Linux
system.
How the Environment and Environmental Variables Work
Every time a shell session spawns, a process takes place to gather and compile information
that should be available to the shell process and its child processes. It obtains the data for
these settings from a variety of different files and settings on the system.
Basically the environment provides a medium through which the shell process can get or set
settings and, in turn, pass these on to its child processes.
The environment is implemented as strings that represent key-value pairs. If multiple values
are passed, they are typically separated by colon (:) characters. Each pair will generally look something like this:
KEY=value1:value2:...
If the value contains significant white-space, quotations are used:
KEY="value with spaces"
The keys in these scenarios are variables. They can be one of two types, environmental
variables or shell variables.
Environmental variables are variables that are defined for the current shell and are
inherited by any child shells or processes. Environmental variables are used to pass
information into processes that are spawned from the shell.
Shell variables are variables that are contained exclusively within the shell in which they
were set or defined. They are often used to keep track of ephemeral data, like the current
working directory.
By convention, these types of variables are usually defined using all capital letters. This
helps users distinguish environmental variables within other contexts.
Printing Shell and Environmental Variables
Each shell session keeps track of its own shell and environmental variables. We can access
these in a few different ways.
We can see a list of all of our environmental variables by using the env or
printenv commands. In their default state, they should function exactly the
same:
This is fairly typical of the output of both printenv and env .
The difference between the two commands is only apparent in their more specific functionality.
For instance, with printenv , you can request the values of individual
variables:
printenv SHELL
/bin/bash
On the other hand, env lets you modify the environment that programs run in by passing a set of variable definitions into a command like this:
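# VAR1, command_to_run and command_options are placeholder names for illustration
env VAR1="value" command_to_run command_options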
Since, as we learned above, child processes typically inherit the environmental variables of
the parent process, this gives you the opportunity to override values or add additional
variables for the child.
As you can see from the output of our printenv command, there are quite a few
environmental variables set up through our system files and processes without our input.
These show the environmental variables, but how do we see shell variables?
The set command can be used for this. If we type set without any
additional parameters, we will get a list of all shell variables, environmental variables,
local variables, and shell functions:
This is usually a huge list. You probably want to pipe it into a pager program to deal with
the amount of output easily:
set | less
The amount of additional information that we receive back is a bit overwhelming. We probably
do not need to know all of the bash functions that are defined, for instance.
We can clean up the output by specifying that set should operate in POSIX mode,
which won't print the shell functions. We can execute this in a sub-shell so that it does not
change our current environment:
(set -o posix; set)
This will list all of the environmental and shell variables that are defined.
We can attempt to compare this output with the output of the env or
printenv commands to try to get a list of only shell variables, but this will be
imperfect due to the different ways that these commands output information:
comm -23 <(set -o posix; set | sort) <(env | sort)
This will likely still include a few environmental variables, due to the fact that the
set command outputs quoted values, while the printenv and
env commands do not quote the values of strings.
This should still give you a good idea of the environmental and shell variables that are set
in your session.
These variables are used for all sorts of things. They provide an alternative way of setting
persistent values for the session between processes, without writing changes to a
file.
Common Environmental and Shell Variables
Some environmental and shell variables are very useful and are referenced fairly often.
Here are some common environmental variables that you will come across:
SHELL : This describes the shell that will be interpreting any commands you type in. In
most cases, this will be bash by default, but other values can be set if you prefer other
options.
TERM : This specifies the type of terminal to emulate when running the shell. Different
hardware terminals can be emulated for different operating requirements. You usually won't
need to worry about this though.
USER : The current logged in user.
PWD : The current working directory.
OLDPWD : The previous working directory. This is kept by the shell in order to switch
back to your previous directory by running cd - .
LS_COLORS : This defines color codes that are used to optionally add colored output to
the ls command. This is used to distinguish different file types and provide
more info to the user at a glance.
MAIL : The path to the current user's mailbox.
PATH : A list of directories that the system will check when looking for commands. When a
user types in a command, the system will check directories in this order for the
executable.
LANG : The current language and localization settings, including character encoding.
HOME : The current user's home directory.
_ : The most recent previously executed command.
In addition to these environmental variables, some shell variables that you'll often see
are:
BASHOPTS : The list of options that were used when bash was executed. This can be useful
for finding out if the shell environment will operate in the way you want it to.
BASH_VERSION : The version of bash being executed, in human-readable form.
BASH_VERSINFO : The version of bash, in machine-readable output.
COLUMNS : The number of columns wide that are being used to draw output on the
screen.
DIRSTACK : The stack of directories that are available with the pushd and
popd commands.
HISTFILESIZE : Number of lines of command history stored to a file.
HISTSIZE : Number of lines of command history allowed in memory.
HOSTNAME : The hostname of the computer at this time.
IFS : The internal field separator to separate input on the command line. By default,
this is a space.
PS1 : The primary command prompt definition. This is used to define what your prompt
looks like when you start a shell session. The PS2 is used to declare secondary
prompts for when a command spans multiple lines.
SHELLOPTS : Shell options that can be set with the set option.
UID : The UID of the current user.
Setting Shell and Environmental Variables
To better understand the difference between shell and environmental variables, and to
introduce the syntax for setting these variables, we will do a small
demonstration.
Creating Shell Variables
We will begin by defining a shell variable within our current session. This is easy to
accomplish; we only need to specify a name and a value. We'll adhere to the convention of
keeping all caps for the variable name, and set it to a simple string.
TEST_VAR='Hello World!'
Here, we've used quotations since the value of our variable contains a space. Furthermore,
we've used single quotes because the exclamation point is a special character in the bash shell
that normally expands to the bash history if it is not escaped or put into single quotes.
We now have a shell variable. This variable is available in our current session, but will
not be passed down to child processes.
We can see this by grepping for our new variable within the set output:
set | grep TEST_VAR
TEST_VAR='Hello World!'
We can verify that this is not an environmental variable by trying the same thing with
printenv :
printenv | grep TEST_VAR
No output should be returned.
Let's take this as an opportunity to demonstrate a way of accessing the value of any shell
or environmental variable.
echo $TEST_VAR
Hello World!
As you can see, you reference the value of a variable by preceding its name with a $
sign. The shell takes this to mean that it should substitute the value of the variable when it
comes across this.
So now we have a shell variable. It shouldn't be passed on to any child processes. We can
spawn a new bash shell from within our current one to demonstrate:
bash
echo $TEST_VAR
If we type bash to spawn a child shell, and then try to access the contents of
the variable, nothing will be returned. This is what we expected.
Get back to our original shell by typing exit :
exit
Creating Environmental Variables
Now, let's turn our shell variable into an environmental variable. We can do this by
exporting the variable. The command to do so is appropriately named:
export TEST_VAR
This will change our variable into an environmental variable. We can check this by checking
our environmental listing again:
printenv | grep TEST_VAR
TEST_VAR=Hello World!
This time, our variable shows up. Let's try our experiment with our child shell again:
bash
echo $TEST_VAR
Hello World!
Great! Our child shell has received the variable set by its parent. Before we exit this
child shell, let's try to export another variable. We can set environmental variables in a
single step like this:
export NEW_VAR="Testing export"
Test that it's exported as an environmental variable:
printenv | grep NEW_VAR
NEW_VAR=Testing export
Now, let's exit back into our original shell:
exit
Let's see if our new variable is available:
echo $NEW_VAR
Nothing is returned.
This is because environmental variables are only passed to child processes. There isn't a
built-in way of setting environmental variables of the parent shell. This is good in most cases
and prevents programs from affecting the operating environment from which they were called.
The NEW_VAR variable was set as an environmental variable in our child shell.
This variable would be available to itself and any of its child shells and processes. When we
exited back into our main shell, that environment was destroyed.
Demoting and Unsetting Variables
We still have our TEST_VAR variable defined as an environmental variable. We
can change it back into a shell variable by typing:
export -n TEST_VAR
It is no longer an environmental variable:
printenv | grep TEST_VAR
However, it is still a shell variable:
set | grep TEST_VAR
TEST_VAR='Hello World!'
If we want to completely unset a variable, either shell or environmental, we can do so with
the unset command:
unset TEST_VAR
We can verify that it is no longer set:
echo $TEST_VAR
Nothing is returned because the variable has been unset.
Setting Environmental Variables at Login
We've already mentioned that many programs use environmental variables to decide the
specifics of how to operate. We do not want to have to set important variables up every time we
start a new shell session, and we have already seen how many variables are already set upon
login, so how do we make and define variables automatically?
This is actually a more complex problem than it initially seems, due to the numerous
configuration files that the bash shell reads depending on how it is started.
The Difference between Login, Non-Login, Interactive, and Non-Interactive Shell Sessions
The bash shell reads different configuration files depending on how the session is
started.
One distinction between different sessions is whether the shell is being spawned as a
"login" or "non-login" session.
A login shell is a shell session that begins by authenticating the user. If you are signing
into a terminal session or through SSH and authenticate, your shell session will be set as a
"login" shell.
If you start a new shell session from within your authenticated session, like we did by
calling the bash command from the terminal, a non-login shell session is started.
You were not asked for your authentication details when you started your child shell.
Another distinction that can be made is whether a shell session is interactive, or
non-interactive.
An interactive shell session is a shell session that is attached to a terminal. A
non-interactive shell session is one that is not attached to a terminal session.
So each shell session is classified as either login or non-login and interactive or
non-interactive.
A normal session that begins with SSH is usually an interactive login shell. A script run
from the command line is usually run in a non-interactive, non-login shell. A terminal session
can be any combination of these two properties.
Whether a shell session is classified as a login or non-login shell has implications on
which files are read to initialize the shell session.
A session started as a login session will read configuration details from the
/etc/profile file first. It will then look for the first login shell configuration
file in the user's home directory to get user-specific configuration details.
It reads the first file that it can find out of ~/.bash_profile ,
~/.bash_login , and ~/.profile and does not read any further
files.
In contrast, a session defined as a non-login shell will read /etc/bash.bashrc
and then the user-specific ~/.bashrc file to build its environment.
Non-interactive shells read the environmental variable called BASH_ENV and read
the file specified to define the new environment.
Implementing Environmental Variables
As you can see, there are a variety of different files that we would usually need to look at
for placing our settings.
This provides a lot of flexibility that can help in specific situations where we want
certain settings in a login shell, and other settings in a non-login shell. However, most of
the time we will want the same settings in both situations.
Fortunately, most Linux distributions configure the login configuration files to source the
non-login configuration files. This means that you can define the environmental variables that you
want in both types of session inside the non-login configuration files. They will then be read in both
scenarios.
We will usually be setting user-specific environmental variables, and we usually will want
our settings to be available in both login and non-login shells. This means that the place to
define these variables is in the ~/.bashrc file.
Open this file now:
nano ~/.bashrc
This will most likely contain quite a bit of data already. Most of the definitions here are
for setting bash options, which are unrelated to environmental variables. You can set
environmental variables just like you would from the command line:
export VARNAME=value
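For instance (the values here are hypothetical, shown only to illustrate the syntax):
export EDITOR=nano               # default editor for programs that honor $EDITOR
export PATH="$HOME/bin:$PATH"    # prepend a personal bin directory to the search path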
We can then save and close the file. The next time you start a shell session, your
environmental variable declaration will be read and passed on to the shell environment. You can
force your current session to read the file now by typing:
source ~/.bashrc
If you need to set system-wide variables, you may want to think about adding them to
/etc/profile , /etc/bash.bashrc , or /etc/environment
.
Conclusion
Environmental and shell variables are always present in your shell sessions and can be very
useful. They are an interesting way for a parent process to set configuration details for its
children, and are a way of setting options outside of files.
This has many advantages in specific situations. For instance, some deployment mechanisms
rely on environmental variables to configure authentication information. This is useful because
it does not require keeping these in files that may be seen by outside parties.
There are plenty of other, more mundane, but more common scenarios where you will need to
read or alter the environment of your system. These tools and techniques should give you a good
foundation for making these changes and using them correctly.
I've used a number of different *nix-based systems of the years, and it seems like every flavor
of Bash I use has a different algorithm for deciding which startup scripts to run. For the purposes
of tasks like setting up environment variables and aliases and printing startup messages (e.g.
MOTDs), which startup script is the appropriate place to do these?
What's the difference between putting things in .bashrc , .bash_profile
, and .environment ? I've also seen other files such as .login ,
.bash_login , and .profile ; are these ever relevant? What are the differences
in which ones get run when logging in physically, logging in remotely via ssh, and opening a new
terminal window? Are there any significant differences across platforms (including Mac OS X (and
its Terminal.app) and Cygwin Bash)?
The main difference with shell config files is that some are only read by "login" shells (e.g.
when you login from another host, or login at the text console of a local unix machine). These
are the ones called, say, .login or .profile or .zlogin
(depending on which shell you're using).
Then you have config files that are read by "interactive" shells (as in, ones connected to
a terminal, or a pseudo-terminal in the case of, say, a terminal emulator running under a windowing
system). These are the ones with names like .bashrc , .tcshrc ,
.zshrc , etc.
bash complicates this in that .bashrc is only read by a shell that's
both interactive and non-login , so you'll find most people end up telling their
.bash_profile to also read .bashrc with something like
[[ -r ~/.bashrc ]] && . ~/.bashrc
Other shells behave differently - eg with zsh , .zshrc is always
read for an interactive shell, whether it's a login one or not.
The manual page for bash explains the circumstances under which each file is read. Yes, behaviour
is generally consistent between machines.
.profile is simply the login script filename originally used by /bin/sh
. bash , being generally backwards-compatible with /bin/sh , will read
.profile if one exists.
Login shells are the ones you log in with (so they are not started when merely
opening an xterm, for example). There are other ways to login. For example, using an X display
manager. Those have other ways to read and export environment variables at login time.
Also read the INVOCATION chapter in the manual. It says "The following paragraphs
describe how bash executes its startup files." , I think that's spot-on :) It explains
what an "interactive" shell is too.
Bash does not know about .environment . I suspect that's a file of your distribution,
to set environment variables independent of the shell that you drive.
Classically, ~/.profile is used by Bourne Shell, and is probably supported by Bash
as a legacy measure. Again, ~/.login and ~/.cshrc were used by C Shell
- I'm not sure that Bash uses them at all.
The ~/.bash_profile would be used once, at login. The ~/.bashrc script
is read every time a shell is started. This is analogous to ~/.cshrc for C Shell.
One consequence is that stuff in ~/.bashrc should be as lightweight (minimal)
as possible to reduce the overhead when starting a non-login shell.
I believe the ~/.environment file is a compatibility file for Korn Shell.
I found information about .bashrc and .bash_profile
here to
sum it up:
.bash_profile is executed when you login. Stuff you put in there might be your PATH and
other important environment variables.
.bashrc is used for non-login shells. I'm not sure what that means. I know that RedHat executes
it every time you start another shell (su to this user or simply calling bash again). You might
want to put aliases in there, but again I am not sure what that means. I simply ignore it myself.
.profile is the equivalent of .bash_profile for the root. I think the name is changed to
let other shells (csh, sh, tcsh) use it as well. (you don't need one as a user)
There is also .bash_logout which executes at, yeah good guess... logout. You might want to
stop daemons or even do a little housekeeping. You can also add "clear" there if you want
to clear the screen when you log out.
Also there is a complete follow up on each of the configurations files
here
These are probably even distro-dependent; not all distros choose to ship each configuration
file, and some have even more. But when they have the same name, they usually include the same
content.
According to
Josh
Staiger , Mac OS X's Terminal.app actually runs a login shell rather than a non-login shell
by default for each new terminal window, calling .bash_profile instead of .bashrc.
He recommends:
Most of the time you don't want to maintain two separate config files for login and non-login
shells; when you set a PATH, you want it to apply to both. You can fix this by sourcing .bashrc
from your .bash_profile file, then putting PATH and common settings in .bashrc.
To do this, add the following lines to .bash_profile:
if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi
Now when you login to your machine from a console .bashrc will be called.
I have used Debian-family distros which appear to execute .profile , but not
.bash_profile , whereas RHEL derivatives execute .bash_profile before
.profile .
It seems to be a mess when you have to set up environment variables to work in any Linux OS.
I consistently have more than one terminal open. Anywhere from two to ten, doing various bits
and bobs. Now let's say I restart and open up another set of terminals. Some remember certain
things, some forget.
I want a history that:
Remembers everything from every terminal
Is instantly accessible from every terminal (eg if I
ls
in one, switch to
another already-running terminal and then press up,
ls
shows up)
Doesn't forget command if there are spaces at the front of the command.
Anything I can do to make bash work more like that?
# Avoid duplicates
export HISTCONTROL=ignoredups:erasedups
# When the shell exits, append to the history file instead of overwriting it
shopt -s histappend
# After each command, append to the history file and reread it
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"
export HISTCONTROL=ignoredups:erasedups # no duplicate entries
export HISTSIZE=100000 # big big history
export HISTFILESIZE=100000 # big big history
shopt -s histappend # append to history, don't overwrite it
# Save and reload the history after each command finishes
export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"
Tested with bash 3.2.17 on Mac OS X 10.5, bash 4.1.7 on 10.6.
Here is my attempt at Bash session history sharing. This will enable history sharing between
bash sessions in a way that the history counter does not get mixed up and history expansion
like
!number
will work (with some constraints).
Using Bash version 4.1.5 under Ubuntu 10.04 LTS (Lucid Lynx).
HISTSIZE=9000
HISTFILESIZE=$HISTSIZE
HISTCONTROL=ignorespace:ignoredups
_bash_history_sync() {
builtin history -a #1
HISTFILESIZE=$HISTSIZE #2
builtin history -c #3
builtin history -r #4
}
history() { #5
_bash_history_sync
builtin history "$@"
}
PROMPT_COMMAND=_bash_history_sync
Explanation:
1. Append the just entered line to the $HISTFILE (default is .bash_history). This will cause $HISTFILE to grow by one line.
2. Setting the special variable $HISTFILESIZE to some value will cause Bash to truncate $HISTFILE to be no longer than $HISTFILESIZE lines by removing the oldest entries.
3. Clear the history of the running session. This will reduce the history counter by the amount of $HISTSIZE.
4. Read the contents of $HISTFILE and insert them into the current running session history. This will raise the history counter by the amount of lines in $HISTFILE. Note that the line count of $HISTFILE is not necessarily $HISTFILESIZE.
5. The history() function overrides the builtin history to make sure that the history is synchronised before it is displayed. This is necessary for the history expansion by number (more about this later).
More explanation:
Step 1 ensures that the command from the current running session gets written to the global history file.
Step 4 ensures that the commands from the other sessions get read into the current session history.
Because step 4 will raise the history counter, we need to reduce the counter in some way. This is done in step 3.
In step 3 the history counter is reduced by $HISTSIZE. In step 4 the history counter is raised by the number of lines in $HISTFILE. In step 2 we make sure that the line count of $HISTFILE is exactly $HISTSIZE (this means that $HISTFILESIZE must be the same as $HISTSIZE).
About the constraints of the history expansion:
When using history expansion by number, you should always look up the number immediately
before using it. That means no bash prompt display between looking up the number and using
it. That usually means no enter and no ctrl+c.
Generally, once you have more than one Bash session, there is no guarantee whatsoever that a history expansion by number will retain its value between two Bash prompt displays, because when PROMPT_COMMAND is executed the history from all other Bash sessions is integrated into the history of the current session. If any other bash session has a new command then the history numbers of the current session will be different.
I find this constraint reasonable. I have to look the number up every time anyway because
I can't remember arbitrary history numbers.
Usually I use the history expansion by number like this
$ history | grep something #note number
$ !number
I recommend using the following Bash options.
## reedit a history substitution line if it failed
shopt -s histreedit
## edit a recalled history line before executing
shopt -s histverify
Strange bugs:
Running the history command piped to anything will result in that command being listed in the
history twice. For example:
$ history | head
$ history | tail
$ history | grep foo
$ history | true
$ history | false
All will be listed in the history twice. I have no idea why.
Ideas for improvements:
Modify the function _bash_history_sync() so it does not execute every time. For example, it should not execute after a CTRL+C on the prompt. I often use CTRL+C to discard a long command line when I decide that I do not want to execute that line. Sometimes I have to use CTRL+C to stop a Bash completion script.
Commands from the current session should always be the most recent in the history of the current session. This will also have the side effect that a given history number keeps its value for history entries from this session.
I'm not aware of any way using bash. But it's one of the most popular features of zsh.
Personally I prefer zsh over bash so I recommend trying it.
Here's the part of my .zshrc that deals with history:
SAVEHIST=10000 # Number of entries
HISTSIZE=10000
HISTFILE=~/.zsh/history # File
setopt APPEND_HISTORY # Don't erase history
setopt EXTENDED_HISTORY # Add additional data to history like timestamp
setopt INC_APPEND_HISTORY # Add immediately
setopt HIST_FIND_NO_DUPS # Don't show duplicates in search
setopt HIST_IGNORE_SPACE # Don't preserve spaces. You may want to turn it off
setopt NO_HIST_BEEP # Don't beep
setopt SHARE_HISTORY # Share history between session/terminals
If the histappend shell option is enabled (see the description of shopt under SHELL
BUILTIN COMMANDS below), the lines are appended to the history file, otherwise the history
file is over-written.
You can edit your BASH prompt to run the "history -a" and "history -r" that Muerr suggested:
savePS1=$PS1
(in case you mess something up, which is almost guaranteed)
PS1=$savePS1`history -a;history -r`
(note that these are back-ticks; they'll run history -a and history -r on every prompt.
Since they don't output any text, your prompt will be unchanged.)
Once you've got your PS1 variable set up the way you want, set it permanently in your
~/.bashrc file.
If you want to go back to your original prompt while testing, do:
PS1=$savePS1
I've done basic testing on this to ensure that it sort of works, but can't speak to any side-effects from running history -a;history -r on every prompt.
The problem is the following: I have two shell windows A and B. In shell window A, I run sleep 9999, and (without waiting for the sleep to finish) in shell window B, I want to be able to see sleep 9999 in the bash history.
The reason why most other solutions here won't solve this problem is that they are writing their history changes to the history file using PROMPT_COMMAND or PS1, both of which execute too late, only after the sleep 9999 command has finished.
Here's an alternative that I use. It's cumbersome but it addresses the issue that @axel_c
mentioned where sometimes you may want to have a separate history instance in each terminal
(one for make, one for monitoring, one for vim, etc).
I keep a separate appended history file that I constantly update. I have the following
mapped to a hotkey:
history | grep -v history >> ~/master_history.txt
This appends all history from the current terminal to a file called master_history.txt in
your home dir.
I also have a separate hotkey to search through the master history file:
cat /home/toby/master_history.txt | grep -i
I use cat | grep because it leaves the cursor at the end to enter my regex. A less ugly
way to do this would be to add a couple of scripts to your path to accomplish these tasks,
but hotkeys work for my purposes. I also periodically will pull history down from other hosts
I've worked on and append that history to my master_history.txt file.
It's always nice to be able to quickly search and find that tricky regex you used or that
weird perl one-liner you came up with 7 months ago.
Right, So finally this annoyed me to find a decent solution:
# Write history after each command
_bash_history_append() {
builtin history -a
}
PROMPT_COMMAND="_bash_history_append; $PROMPT_COMMAND"
What this does is a sort of amalgamation of what was said in this thread, except that I
don't understand why you would reload the global history after every command. I very rarely
care about what happens in other terminals, but I always run a series of commands, say in one
terminal:
make
ls -lh target/*.foo
scp target/artifact.foo vm:~/
Here is my enhancement to @lesmana's answer. The main difference is that concurrent windows don't share history. This means you can keep working in your windows, without having context from other windows getting loaded into your current windows.
If you explicitly type 'history', OR if you open a new window, then you get the history from all previous windows.
Also, I use this strategy to archive every command ever typed on my machine.
# Consistent and forever bash history
HISTSIZE=100000
HISTFILESIZE=$HISTSIZE
HISTCONTROL=ignorespace:ignoredups
_bash_history_sync() {
builtin history -a #1
HISTFILESIZE=$HISTSIZE #2
}
_bash_history_sync_and_reload() {
builtin history -a #1
HISTFILESIZE=$HISTSIZE #2
builtin history -c #3
builtin history -r #4
}
history() { #5
_bash_history_sync_and_reload
builtin history "$@"
}
export HISTTIMEFORMAT="%y/%m/%d %H:%M:%S "
PROMPT_COMMAND='history 1 >> ${HOME}/.bash_eternal_history'
PROMPT_COMMAND="_bash_history_sync; $PROMPT_COMMAND"
I have written a script for setting a history file per session or task; it's based on the
following.
# write existing history to the old file
history -a
# set new historyfile
export HISTFILE="$1"
export HISET=$1
# touch the new file to make sure it exists
touch $HISTFILE
# load new history file
history -r $HISTFILE
It doesn't necessarily save every history command, but it saves the ones that I care about,
and it's easier to retrieve them than going through every command. My version also lists all
history files and provides the ability to search through them all.
I chose to put history in a file-per-tty, as multiple people can be working on the same
server - separating each session's commands makes it easier to audit.
# Convert /dev/nnn/X or /dev/nnnX to "nnnX"
HISTSUFFIX=`tty | sed 's/\///g;s/^dev//g'`
# History file is now .bash_history_pts0
HISTFILE=".bash_history_$HISTSUFFIX"
HISTTIMEFORMAT="%y-%m-%d %H:%M:%S "
HISTCONTROL=ignoredups:ignorespace
shopt -s histappend
HISTSIZE=1000
HISTFILESIZE=5000
History now looks like:
user@host:~# test 123
user@host:~# test 5451
user@host:~# history
1 15-08-11 10:09:58 test 123
2 15-08-11 10:10:00 test 5451
3 15-08-11 10:10:02 history
With the files looking like:
user@host:~# ls -la .bash*
-rw------- 1 root root 4275 Aug 11 09:42 .bash_history_pts0
-rw------- 1 root root 75 Aug 11 09:49 .bash_history_pts1
-rw-r--r-- 1 root root 3120 Aug 11 10:09 .bashrc
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"
and
PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"
If you run source ~/.bashrc, the $PROMPT_COMMAND will be like
"history -a; history -c; history -r history -a; history -c; history -r"
and
"history -a; history -n history -a; history -n"
This repetition occurs each time you run 'source ~/.bashrc'. You can check PROMPT_COMMAND
after each time you run 'source ~/.bashrc' by running 'echo $PROMPT_COMMAND'.
You can see that some commands are apparently broken: "history -n history -a". But the good
news is that it still works, because the other parts still form a valid command sequence (just
involving some extra cost due to executing some commands repeatedly, and not so clean).
Personally I use the following simple version:
shopt -s histappend
PROMPT_COMMAND="history -a; history -c; history -r"
which has most of the functionalities while no such issue as mentioned above.
Another point to make is: there is really nothing magic here. PROMPT_COMMAND is just a plain
bash variable. The commands in it get executed before you get the bash prompt (the $
sign). For example, if your PROMPT_COMMAND is "echo 123" and you run "ls" in your terminal, the
effect is like running "ls; echo 123".
$ PROMPT_COMMAND="echo 123"
output (Just like running 'PROMPT_COMMAND="echo 123"; $PROMPT_COMMAND'):
123
Run the following:
$ echo 3
output:
3
123
"history -a" is used to write the history commands in memory to ~/.bash_history
"history -c" is used to clear the history commands in memory
"history -r" is used to read history commands from ~/.bash_history to memory
Here is the snippet from my .bashrc and short explanations wherever needed:
# The following line ensures that history logs screen commands as well
shopt -s histappend
# This line makes the history file to be rewritten and reread at each bash prompt
PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"
# Have lots of history
HISTSIZE=100000 # remember the last 100000 commands
HISTFILESIZE=100000 # start truncating commands after 100000 lines
HISTCONTROL=ignoreboth # ignoreboth is shorthand for ignorespace and ignoredups
The HISTFILESIZE and HISTSIZE are personal preferences and you can change them as per your
tastes.
##############################################################################
# History Configuration for ZSH
##############################################################################
HISTSIZE=10000 #How many lines of history to keep in memory
HISTFILE=~/.zsh_history #Where to save history to disk
SAVEHIST=10000 #Number of history entries to save to disk
#HISTDUP=erase #Erase duplicates in the history file
setopt appendhistory #Append history to the history file (no overwriting)
setopt sharehistory #Share history across terminals
setopt incappendhistory #Immediately append to the history file, not just when a term is killed
"... ', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or ' ..."
Following some issues with scp (it did not like the presence of the bash bind command in my .bashrc file, apparently), I followed the advice of a clever guy on the Internet (I just cannot find that post right now) who put this at the top of his .bashrc file:
[[ ${-#*i} != ${-} ]] || return
in order to make sure that the bash initialization is NOT executed unless in interactive
session.
Now, that works. However, I am not able to figure how it works. Could you enlighten
me?
According to this answer, the $- is the current options set for the shell, and I know that ${} is the so-called "substring" syntax for expanding variables.
However, I do not understand the ${-#*i} part. And why $-#*i is not the same as ${-#*i}.
The word is expanded to produce a pattern just as in filename expansion. If the pattern
matches the beginning of the expanded value of parameter, then the result of the expansion
is the expanded value of parameter with the shortest matching pattern (the '#' case) or the
longest matching pattern (the '##' case) deleted.
If parameter is '@' or '*', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.
So basically what happens in ${-#*i} is that *i is expanded, and if it matches the beginning of the value of $-, then the result of the whole expansion is $- with the shortest matching pattern between *i and $- deleted.
Example:
VAR="baioasd"
echo ${VAR#*i}
outputs oasd.
In your case:
If the shell is interactive, $- will contain the letter 'i', so when you strip the variable $- of the pattern *i you will get a string that is different from the original $- ( [[ ${-#*i} != ${-} ]] yields true).
If the shell is not interactive, $- does not contain the letter 'i', so the pattern *i does not match anything in $- and [[ ${-#*i} != $- ]] yields false, and the return statement is executed.
To determine within a startup script whether or not Bash is running interactively, test
the value of the '-' special parameter. It contains i when the shell is interactive.
Your substitution removes the string up to, and including, the i and tests if the substituted version is equal to the original string. They will be different if there is i in the ${-}.
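An equivalent interactivity check written with case, shown only as a sketch of the same idea (not what the original poster used):
case $- in
    *i*) ;;        # interactive: carry on with the rest of the file
    *)   return ;; # non-interactive: stop sourcing here
esac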
The reason you separate the login and non-login shell is because the .bashrc
file is reloaded every time you start a new copy of Bash.
The .profile file is loaded only when you either log in or use the appropriate
flag to tell Bash to act as a login shell.
Personally,
I put my PATH setup into a .profile file (because I sometimes
use other shells);
I put my Bash aliases and functions into my .bashrc file;
I put this
#!/bin/bash
# CRM .bash_profile Time-stamp: "2008-12-07 19:42"
# echo "Loading ${HOME}/.bash_profile"
source ~/.profile   # get my PATH setup
source ~/.bashrc    # get my Bash aliases
in my .bash_profile file.
Oh, and the reason you need to type bash again to get the new alias is that Bash
loads your .bashrc file when it starts but it doesn't reload it unless you tell it
to. You can reload the .bashrc file (and not need a second shell) by typing
source ~/.bashrc
which loads the .bashrc file as if you had typed the commands directly to Bash.
You only log in once, and that's when ~/.bash_profile or ~/.profile
is read and executed. Since everything you run from your login shell inherits the login shell's
environment, you should put all your environment variables in there. Like LESS
, PATH , MANPATH , LC_* , ... For an example, see:
My .profile
Once you log in, you can run several more shells. Imagine logging in, running X, and in
X starting a few terminals with bash shells. That means your login shell started X, which inherited
your login shell's environment variables, which started your terminals, which started your
non-login bash shells. Your environment variables were passed along in the whole chain, so
your non-login shells don't need to load them anymore. Non-login shells only execute
~/.bashrc , not ~/.profile or ~/.bash_profile , for this exact
reason, so in there define everything that only applies to bash . That's functions, aliases,
bash-only variables like HISTSIZE (this is not an environment variable, don't export it!) ,
shell options with set and shopt , etc. For an example, see:
My .bashrc
Now, as part of UNIX peculiarity, a login-shell does NOT execute ~/.bashrc
but only ~/.profile or ~/.bash_profile , so you should source that
one manually from the latter. You'll see me do that in my ~/.profile too:
source ~/.bashrc .
When bash is invoked as an interactive login shell, or as a non-interactive shell with the
--login option, it first reads and executes commands from the file /etc/profile
, if that file exists. After reading that file, it looks for ~/.bash_profile ,
~/.bash_login , and ~/.profile , in that order, and reads and executes
commands from the first one that exists and is readable. The --noprofile option
may be used when the shell is started to inhibit this behavior.
When a login shell exits, bash reads and executes commands from the file ~/.bash_logout
, if it exists.
When an interactive shell that is not a login shell is started, bash reads and executes
commands from ~/.bashrc , if that file exists. This may be inhibited by using
the --norc option. The --rcfile file option will force bash to read
and execute commands from file instead of ~/.bashrc .
Thus, if you want to get the same behavior for both login shells and interactive non-login
shells, you should put all of your commands in either .bashrc or .bash_profile
, and then have the other file
source the first one.
I feel stupid: declare not found in bash scripting? I was anxious to get my feet wet, and I'm
only up to my toes before I'm stuck...this seems very very easy but I'm not sure what I've done
wrong. Below is the script and its output. What the heck am I missing?
______________________________________________________
#!/bin/bash
declare -a PROD[0]="computers" PROD[1]="HomeAutomation"
printf "${ PROD[*]}"
_______________________________________________________
products.sh: 6: declare: not found
products.sh: 8: Syntax error: Bad substitution
I ran what you posted (but at the command line, not in a script, though that should make no
significant difference), and got this:
Code:
-bash: ${ PROD[*]}: bad substitution
In other words, I couldn't reproduce your first problem, the "declare: not found" error. Try
the declare command by itself, on the command line.
And I got rid of the "bad substitution" problem when I removed the space which is between the
${ and the PROD on the printf line.
Hope this helps.
blackhole54
The previous poster identified your second problem.
As far as your first problem goes ... I am not a bash guru although I have written a number
of bash scripts. So far I have found no need for declare statements. I suspect that you might
not need it either. But if you do want to use it, the following does work:
Code:
#!/bin/bash
declare -a PROD
PROD[0]="computers"
PROD[1]="HomeAutomation"
printf "${PROD[*]}\n"
EDIT: My original post was based on an older version of bash. When I tried the declare statement
you posted I got an error message, but one that was different from yours. I just tried it on a
newer version of bash, and your declare statement worked fine. So it might depend on the version
of bash you are running. What I posted above runs fine on both versions.
Obviously cut out of a much more complex script that was more meaningful:
#!/bin/bash
function InitializeConfig(){
declare -r -g -A SHCFG_INIT=( [a]=b )
declare -r -g -A SHCFG_INIT=( [c]=d )
echo "This statement never gets executed"
}
set -o xtrace
InitializeConfig
echo "Back from function"
The output looks like this:
ronburk@ubuntu:~/ubucfg$ bash bug.sh
+ InitializeConfig
+ SHCFG_INIT=([a]=b)
+ declare -r -g -A SHCFG_INIT
+ SHCFG_INIT=([c]=d)
+ echo 'Back from function'
Back from function
Bash seems to silently execute a function return upon the second declare statement. Starting to think this really is a new bug, but happy to learn otherwise.
By gum, you're right! Then I get a readonly warning on the second declare, which is reasonable, and the function completes. The xtrace output is also interesting; it implies declare without single quotes is really treated as two steps. Ready to become superstitious about always single-quoting the argument to declare. Hard to see how popping the function stack can be anything but a bug, though. –
Ron Burk
Jun 14 '15 at 23:58
I found this thread in [email protected] related to test -v on an assoc array. In short, bash implicitly did test -v SHCFG_INIT[0] in your script. I'm not sure this behavior got introduced in 4.3.
You might want to use declare -p to work around this...
if ! declare -p SHCFG_INIT >/dev/null 2>&1 ; then
    echo "looks like SHCFG_INIT not defined"
fi
====
Well, rats. I think your answer is correct, but also reveals I'm really asking
two separate questions when I thought they were probably the same issue. Since the
title better reflects what turns out to be the "other" question, I'll leave this up
for a while and see if anybody knows what's up with the mysterious implicit
function return... Thanks! –
Ron Burk
Jun 14 '15 at 17:01
Edited question to focus on the remaining issue. Thanks again for the answer on
the "-v" issue with associative arrays. –
Ron Burk
Jun 14 '15 at 17:55
Accepting this answer. Complete answer is here plus your comments above plus
(IMHO) there's a bug in this version of bash (can't see how there can be any excuse
for popping the function stack without warning). Thanks for your excellent research
on this! –
Ron Burk
Jun 21 '15 at 19:31
The declare or typeset builtins, which are exact synonyms, permit modifying the properties of variables. This is a very weak form of the typing [1] available in certain programming languages. The declare command is specific to version 2 or later of Bash. The typeset command also works in ksh scripts.
declare/typeset options
-r readonly
( declare -r var1 works the same as readonly var1 )
This is the rough equivalent of the C const type qualifier. An attempt to change the value of a readonly variable fails with an error message.
-i integer
declare -i number
# The script will treat subsequent occurrences of "number" as an integer.
number=3
echo "Number = $number" # Number = 3
number=three
echo "Number = $number" # Number = 0
# Tries to evaluate the string "three" as an integer.
Certain arithmetic operations are permitted for declared integer variables without the need for expr or let.
n=6/3
echo "n = $n" # n = 6/3
declare -i n
n=6/3
echo "n = $n" # n = 2
-a array
declare -a indices
The variable indices will be treated as an array.
-f function(s)
declare -f
A declare -f line with no arguments in a script causes a listing of all the functions previously defined in that script.
declare -f function_name
A declare -f function_name in a script lists just the function named.
-x export
This declares a variable as available for exporting outside the environment of the script itself.
-x var=$value
declare -x var3=373
The declare command permits assigning a value to a variable in the same statement as setting its properties.
Example 9-10. Using declare to type variables
#!/bin/bash
func1 ()
{
echo This is a function.
}
declare -f # Lists the function above.
echo
declare -i var1 # var1 is an integer.
var1=2367
echo "var1 declared as $var1"
var1=var1+1 # Integer declaration eliminates the need for 'let'.
echo "var1 incremented by 1 is $var1."
# Attempt to change variable declared as integer.
echo "Attempting to change var1 to floating point value, 2367.1."
var1=2367.1 # Results in error message, with no change to variable.
echo "var1 is still $var1"
echo
declare -r var2=13.36 # 'declare' permits setting a variable property
#+ and simultaneously assigning it a value.
echo "var2 declared as $var2" # Attempt to change readonly variable.
var2=13.37 # Generates error message, and exit from script.
echo "var2 is still $var2" # This line will not execute.
exit 0 # Script will not exit here.
Using the declare builtin restricts the scope of a variable.
foo ()
{
FOO="bar"
}
bar ()
{
foo
echo $FOO
}
bar # Prints bar.
However . . .
foo (){
declare FOO="bar"
}
bar ()
{
foo
echo $FOO
}
bar # Prints nothing.
# Thank you, Michael Iatrou, for pointing this out.
9.2.1. Another use for declare
The declare command can be helpful in identifying variables, environmental or otherwise. This can be especially useful with arrays.
In this context,
typing
a variable means to classify it and restrict its
properties. For example, a variable
declared
or
typed
as an integer is no
longer available for
string operations
.
Intro
The day will come when you want to give arguments to your scripts. These arguments are known as positional parameters. Some relevant special parameters are described below:
Parameter(s)   Description
$0             the first positional parameter, equivalent to argv[0] in C, see "The first argument"
$FUNCNAME      the function name (attention: inside a function, $0 is still the $0 of the shell, not the function name)
$*             all positional parameters except $0, see "Mass usage"
$@             all positional parameters except $0, see "Mass usage"
$#             the number of arguments, not counting $0
These positional parameters reflect exactly what was given to the script when it was called.
Option-switch parsing (e.g. -h for displaying help) is not performed at this point.
See also the dictionary entry for "parameter".
The first argument
The very first argument you can access is referenced as $0. It is usually set to the script's name exactly as called, and it's set on shell initialization:
Testscript - it just echoes $0:
#!/bin/bash
echo "$0"
You see, $0 is always set to the name the script is called with ($ is the prompt):
> ./testscript
./testscript
> /usr/bin/testscript
/usr/bin/testscript
However, this isn't true for login shells:
> echo "$0"
-bash
In other terms, $0 is not a positional parameter, it's a special parameter independent from the positional parameter list. It can be set to anything. In the ideal case it's the pathname of the script, but since this gets set on invocation, the invoking program can easily influence it (the login program does that for login shells, by prefixing a dash, for example).
Inside a function, $0 still behaves as described above. To get the function name, use $FUNCNAME.
Shifting
The builtin command shift is used to change the positional parameter values:
$1 will be discarded
$2 will become $1
$3 will become $2
in general: $N will become $N-1
The command can take a number as argument: the number of positions to shift. E.g. shift 4 shifts $5 to $1.
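A small illustration may help; the script name demo.sh and its arguments are made up for this sketch:
#!/bin/bash
# assume the script was called as:  ./demo.sh one two three
echo "$1 $2 $3"   # one two three
shift             # discard "one"
echo "$1 $2"      # two three
shift 2           # discard two more positions
echo "$#"         # 0 - nothing left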
Using them
Enough theory, you want to access your script-arguments. Well, here we go.
One by one
One way is to access specific parameters:
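For example (a minimal sketch, with made-up argument meanings):
echo "first argument:  $1"
echo "second argument: $2"
echo "tenth argument:  ${10}"   # curly braces are required beyond $9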
While useful in some situations, this way lacks flexibility. The maximum number of arguments is a fixed value - which is a bad idea if you write a script that takes many filenames as arguments.
⇒ forget that one
Loops
There are several ways to loop through the positional parameters.
You can code a C-style for-loop using $# as the end value. On every iteration, the shift command is used to shift the argument list:
numargs=$#
for ((i=1 ; i <= numargs ; i++))
do
echo "$1"
shift
done
Not very stylish, but usable. The numargs variable is used to store the initial value of $# because the shift command will change it as the script runs.
Another way to iterate one argument at a time is the for loop without a given wordlist. The loop uses the positional parameters as a wordlist:
for arg
do
echo "$arg"
done
Advantage:
The positional parameters will be preserved.
The next method is similar to the first example (the for loop), but it doesn't test for reaching $#. It shifts and checks if $1 still expands to something, using the test command:
while [ "$1" ]
do
echo "$1"
shift
done
Looks nice, but has the disadvantage of stopping when $1 is empty (null-string). Let's modify it to run as long as $1 is defined (but may be null), using parameter expansion for an alternate value:
while [ "${1+defined}" ]; do
echo "$1"
shift
done
Getopts
There is a small tutorial dedicated to ''getopts'' (under construction).
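Until that tutorial is finished, here is a minimal, hedged sketch of what getopts usage typically looks like; the option letters and the variable names (file, verbose) are only illustrative:
while getopts ":hvf:" opt; do
  case "$opt" in
    h)  echo "usage: $0 [-h] [-v] [-f FILE]"; exit 0 ;;
    v)  verbose="verbose" ;;
    f)  file="$OPTARG" ;;
    \?) echo "unknown option: -$OPTARG" >&2; exit 1 ;;
    :)  echo "option -$OPTARG requires an argument" >&2; exit 1 ;;
  esac
done
shift $((OPTIND - 1))   # what remains in $1, $2, ... are the non-option arguments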
Mass usage
All Positional Parameters
Sometimes it's necessary to just "relay" or "pass" given arguments to another program. It's very inefficient to do that in one of these loops, as you will most likely destroy the integrity of the arguments (spaces!).
The shell developers created $* and $@ for this purpose.
As an overview:
Syntax    Effective result
$*        $1 $2 $3 ... ${N}
$@        $1 $2 $3 ... ${N}
"$*"      "$1c$2c$3c...c${N}"
"$@"      "$1" "$2" "$3" ... "${N}"
Without being quoted (double quotes), both have the same effect: all positional parameters from $1 to the last one used are expanded without any special handling.
When the $* special parameter is double quoted, it expands to the equivalent of "$1c$2c$3c$4c...$N", where 'c' is the first character of IFS.
But when the $@ special parameter is used inside double quotes, it expands to the equivalent of "$1" "$2" "$3" "$4" ... "$N", which reflects all positional parameters as they were set initially and passed to the script or function. If you want to re-use your positional parameters to call another program (for example in a wrapper-script), then this is the choice for you: use the double quoted "$@".
Well, let's just say: you almost always want a quoted "$@"!
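A quick demonstration of the difference (the wrapper call is hypothetical):
# called as:  ./wrapper.sh "two words" three
printf '<%s> ' "$@"; echo   # <two words> <three>
printf '<%s> ' "$*"; echo   # <two words three>
printf '<%s> ' $*;   echo   # <two> <words> <three>  (word splitting!)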
Range Of Positional Parameters
Another way to mass expand the positional parameters is similar to what is possible for a range of characters using substring expansion on normal parameters and the mass expansion range of arrays.
${@:START:COUNT}
${*:START:COUNT}
"${@:START:COUNT}"
"${*:START:COUNT}"
The rules for using @ or * and quoting are the same as above. This will expand COUNT positional parameters beginning at START. COUNT can be omitted (${@:START}), in which case all positional parameters beginning at START are expanded.
If START is negative, the positional parameters are numbered in reverse starting with the last one. COUNT may not be negative, i.e. the element count may not be decremented.
Example: START at the last positional parameter:
echo "${@: -1}"
Attention: As of Bash 4, a START of 0 includes the special parameter $0, i.e. the shell name or whatever $0 is set to, when the positional parameters are in use. A START of 1 begins at $1. In Bash 3 and older, both 0 and 1 began at $1.
Setting Positional Parameters
Setting positional parameters with command line arguments is not the only way to set them. The builtin command set may be used to "artificially" change the positional parameters from inside the script or function:
set "This is" my new "set of" positional parameters
# RESULTS IN
# $1: This is
# $2: my
# $3: new
# $4: set of
# $5: positional
# $6: parameters
It's wise to signal "end of options" when setting positional parameters this way. If not, the dashes might be interpreted as an option switch by set itself:
# both ways work, but behave differently. See the article about the set command!
set -- ...
set - ...
Alternatively, this will also preserve any verbose (-v) or tracing (-x) flags, which may otherwise be reset by set:
set -$- ...
Production examples
Using a while loop
To make your program accept options as standard command syntax:
COMMAND [options] <params>  # Like 'cat -A file.txt'
See the simple option parsing code below. It's not that flexible. It doesn't auto-interpret combined options (-fu USER), but it works and is a good rudimentary way to parse your arguments.
#!/bin/sh
# Keeping options in alphabetical order makes it easy to add more.
while :
do
case "$1" in
-f | --file)
file="$2" # You may want to check validity of $2
shift 2
;;
-h | --help)
display_help # Call your function
# no shifting needed here, we're done.
exit 0
;;
-u | --user)
username="$2" # You may want to check validity of $2
shift 2
;;
-v | --verbose)
# It's better to assign a string, than a number like "verbose=1"
# because if you're debugging the script with "bash -x" code like this:
#
# if [ "$verbose" ] ...
#
# You will see:
#
# if [ "verbose" ] ...
#
# Instead of cryptic
#
# if [ "1" ] ...
#
verbose="verbose"
shift
;;
--) # End of all options
shift
break
;;
-*)
echo "Error: Unknown option: $1" >&2
exit 1
;;
*) # No more options
break
;;
esac
done
# End of file
Filter unwanted options with a wrapper script
This simple wrapper enables filtering unwanted options (here: -a and --all for ls) out of the command line. It reads the positional parameters and builds a filtered array consisting of them, then calls ls with the new option set. It also respects -- as "end of options" for ls and doesn't change anything after it:
#!/bin/bash
# simple ls(1) wrapper that doesn't allow the -a option
options=() # the buffer array for the parameters
eoo=0 # end of options reached
while [[ $1 ]]
do
if ! ((eoo)); then
case "$1" in
-a)
shift
;;
--all)
shift
;;
-[^-]*a*|-a?*)
options+=("${1//a}")
shift
;;
--)
eoo=1
options+=("$1")
shift
;;
*)
options+=("$1")
shift
;;
esac
else
options+=("$1")
# Another (worse) way of doing the same thing:
# options=("${options[@]}" "$1")
shift
fi
done
/bin/ls "${options[@]}"
Script execution
Your perfect Bash script executes with syntax errors
If you write Bash scripts with Bash-specific syntax and features, run them with Bash, and run them with Bash in native mode.
Wrong:
no shebang - the interpreter used depends on the OS implementation and the current shell; such a script can be run by calling bash with the script name as an argument, e.g. bash myscript
#!/bin/sh shebang - depends on what /bin/sh actually is; for a Bash it means compatibility mode, not native mode
Your script named "test" doesn't execute
Give it another name. The executable test already exists. In Bash it's a builtin; with other shells, it might be an executable file. Either way, it's a bad name choice!
Workaround: you can call it using the pathname:
/home/user/bin/test
Globbing
Brace expansion is not globbing
The following command line is not related to globbing (filename expansion):
# YOU EXPECT
# -i1.vob -i2.vob -i3.vob ....
echo -i{*.vob,}
# YOU GET
# -i*.vob -i
Why? The brace expansion is simple text substitution. All possible text formed by the prefix, the postfix and the braces themselves are generated. In the example, these are only two: -i*.vob and -i. The filename expansion happens after that, so there is a chance that -i*.vob is expanded to a filename - if you have files like -ihello.vob. But it definitely doesn't do what you expected.
Variables
Setting variables
The Dollar-Sign
There is no $ (dollar-sign) when you reference the name of a variable! Bash is not PHP!
# THIS IS WRONG!
$myvar="Hello world!"
A variable name preceded with a dollar-sign always means that the variable gets expanded. In the example above, it might expand to nothing (because it wasn't set), effectively resulting in
="Hello world!"
which definitely is wrong!
When you need the name of a variable, you write only the name, for example:
(as shown above) to set variables: picture=/usr/share/images/foo.png
to name variables to be used by the read builtin command: read picture
to name variables to be unset: unset picture
When you need the content of a variable, you prefix its name with a dollar-sign, like:
echo "The used picture is: $picture"
Whitespace
Putting spaces on either or both sides of the equal-sign (=) when assigning a value to a variable will fail.
# INCORRECT 1
example = Hello
# INCORRECT 2
example= Hello
# INCORRECT 3
example =Hello
The only valid form is no space between the variable name and the assigned value:
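For comparison, a correct assignment looks like this:
# CORRECT
example=Hello
example="Hello world"   # quote the value if it contains spaces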
Expanding (using) variables
A typical beginner's trap is quoting.
As noted above, when you want to expand a variable, i.e. "get the content", the variable name needs to be prefixed with a dollar-sign. But, since Bash knows various ways to quote and does word-splitting, the result isn't always the same.
Let's define an example variable containing text with spaces:
example="Hello world"
Used form    Result        Number of words
$example     Hello world   2
"$example"   Hello world   1
\$example    $example      1
'$example'   $example      1
If you use parameter expansion, you must use the name (PATH) of the referenced variable/parameter, i.e. not ($PATH):
# WRONG!
echo "The first character of PATH is ${$PATH:0:1}"
# CORRECT
echo "The first character of PATH is ${PATH:0:1}"
Note that if you are using variables in arithmetic expressions, then the bare name is allowed:
((a=$a+7)) # Add 7 to a
((a = a + 7)) # Add 7 to a. Identical to the previous command.
((a += 7)) # Add 7 to a. Identical to the previous command.
a=$((a+7)) # POSIX-compatible version of previous code.
Exporting
Exporting a variable means giving newly created (child-)processes a copy of that variable, not copying a variable created in a child process back to the parent process. The following example does not work, since the variable hello is set in a child process (the process you execute to start that script, ./script.sh):
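A minimal sketch of the failing case; the contents of script.sh are assumed here for illustration:
$ cat script.sh
#!/bin/bash
export hello=world

$ ./script.sh
$ echo "$hello"
$                # nothing printed - "hello" only existed in the child process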
Exporting is one-way. The direction is parent process to child process, not the reverse. The above example will work when you don't execute the script, but include ("source") it:
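Continuing the sketch above:
$ source ./script.sh   # or the POSIX form:  . ./script.sh
$ echo "$hello"
world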
Exit codes
Reacting to exit codes
If you just want to react to an exit code, regardless of its specific value, you don't need to use $? in a test command like this:
grep ^root: /etc/passwd >/dev/null 2>&1
if [ $? -ne 0 ]; then
  echo "root was not found - check the pub at the corner"
fi
This can be simplified to:
if ! grep ^root: /etc/passwd >/dev/null 2>&1; then
  echo "root was not found - check the pub at the corner"
fi
Or, simpler yet:
grep ^root: /etc/passwd >/dev/null 2>&1 || echo "root was not found - check the pub at the corner"
If you need the specific value of $?, there's no other choice. But if you need only a "true/false" exit indication, there's no need for $?.
Output vs. Return Value
It's important to remember the different ways to run a child command, and whether you want the output, the return value, or neither.
When you want to run a command (or a pipeline) and save (or print) the output, whether as a string or an array, you use Bash's $(command) syntax:
listing=$(ls -l /tmp)        # assign the output to a variable (the name "listing" is just an example)
newvariable=$(printf "foo")
When you want to use the return value of a command, just use the command, or add ( ) to run a command or pipeline in a subshell:
if grep someuser /etc/passwd ; then
# do something
fi
if ( w | grep someuser | grep sqlplus ) ; then
# someuser is logged in and running sqlplus
fi
Make sure you're using the form you intended:
# WRONG!
if $(grep ERROR /var/log/messages) ; then
# send alerts
fi
Purpose
An array is a parameter that holds mappings from keys to values. Arrays are used to store a collection of parameters into a parameter. Arrays (in any programming language) are a useful and common composite data structure, and one of the most important scripting features in Bash and other shells.
Here is an abstract representation of an array named NAMES. The indexes go from 0 to 3.
NAMES
0: Peter
1: Anna
2: Greg
3: Jan
Instead of using 4 separate variables, multiple related variables are grouped together into elements of the array, accessible by their key. If you want the second name, ask for index 1 of the array NAMES.
Indexing
Bash supports two different types of ksh-like one-dimensional arrays. Multidimensional arrays are not implemented.
Indexed arrays use positive integer numbers as keys. Indexed arrays are always sparse, meaning indexes are not necessarily contiguous. All syntax used for both assigning and dereferencing indexed arrays is an arithmetic evaluation context (see Referencing). As in C and many other languages, the numerical array indexes start at 0 (zero). Indexed arrays are the most common, useful, and portable type. Indexed arrays were first introduced to Bourne-like shells by ksh88. Similar, partially compatible syntax was inherited by many derivatives including Bash. Indexed arrays always carry the -a attribute.
Associative arrays (sometimes known as a "hash" or "dict") use arbitrary nonempty
strings as keys. In other words, associative arrays allow you to look up a value from a table
based upon its corresponding string label. Associative arrays are always unordered , they merely
associate key-value pairs. If you retrieve multiple values from the array at once, you
can't count on them coming out in the same order you put them in. Associative arrays always carry
the -A attribute, and unlike indexed arrays, Bash requires that they always be declared
explicitly (as indexed arrays are the default, see
declaration
). Associative arrays were first introduced in ksh93, and similar mechanisms were later adopted
by Zsh and Bash version 4. These three are currently the only POSIX-compatible shells with any
associative array support.
Syntax
Referencing
To accommodate referring to array variables and their individual elements, Bash extends the parameter naming scheme with a subscript suffix. Any valid ordinary scalar parameter name is also a valid array name: [[:alpha:]_][[:alnum:]_]*. The parameter name may be followed by an optional subscript enclosed in square brackets to refer to a member of the array.
The overall syntax is arrname[subscript] - where for indexed arrays, subscript
is any valid arithmetic expression, and for associative arrays, any nonempty string. Subscripts are
first processed for parameter and arithmetic expansions, and command and process substitutions. When
used within parameter expansions or as an argument to the
unset builtin,
the special subscripts * and @ are also accepted which act upon arrays
analogously to the way the @ and * special parameters act upon the positional
parameters. In parsing the subscript, bash ignores any text that follows the closing bracket up to
the end of the parameter name.
With few exceptions, names of this form may be used anywhere ordinary parameter names are valid,
such as within arithmetic
expressions , parameter expansions
, and as arguments to builtins that accept parameter names. An array is a Bash parameter
that has been given the -a (for indexed) or -A (for associative) attributes
. However, any regular (non-special or positional) parameter may be validly referenced using a subscript,
because in most contexts, referring to the zeroth element of an array is synonymous with referring
to the array name without a subscript.
# "x" is an ordinary non-array parameter.
$ x=hi; printf '%s ' "$x" "${x[0]}"; echo "${_[0]}"
hi hi hi
The only exceptions to this rule are in a few cases where the array variable's name refers to the array as a whole. This is the case for the unset builtin (see Destruction) and when declaring an array without assigning any values (see Declaration).
Declaration
The following explicitly give variables array attributes, making them arrays:
Syntax
Description
ARRAY=()
Declares an indexed array ARRAY and initializes it to be empty. This can also
be used to empty an existing array.
ARRAY[0]=
Generally sets the first element of an indexed array. If no array ARRAY existed
before, it is created.
declare -a ARRAY
Declares an indexed array ARRAY . An existing array is not initialized.
declare -A ARRAY
Declares an associative array ARRAY . This is the one and only way to create
associative arrays.
Storing values
Storing values in arrays is just as simple as storing values in normal variables.
Syntax
Description
ARRAY[N]=VALUE
Sets the element N of the indexed array ARRAY to VALUE
. N can be any valid
arithmetic expression
ARRAY[STRING]=VALUE
Sets the element indexed by STRING of the associative array ARRAY
.
ARRAY=VALUE
As above. If no index is given, as a default the zeroth element is set to VALUE
. Careful, this is even true of associative arrays - there is no error if no key is specified,
and the value is assigned to string index "0".
ARRAY=(E1 E2 ...)
Compound array assignment - sets the whole array ARRAY to the given list of
elements indexed sequentially starting at zero. The array is unset before assignment unless
the += operator is used. When the list is empty ( ARRAY=() ), the array will be
set to an empty array. This method obviously does not use explicit indexes. An associative
array can not be set like that! Clearing an associative array using ARRAY=() works.
ARRAY=([X]=E1 [Y]=E2 ...)
Compound assignment for indexed arrays with index-value pairs declared individually (here
for example X and Y ). X and Y are arithmetic expressions. This syntax
can be combined with the above - elements declared without an explicitly specified index are
assigned sequentially starting at either the last element with an explicit index, or zero.
ARRAY=([S1]=E1 [S2]=E2 ...)
Individual mass-setting for associative arrays . The named indexes (here: S1
and S2 ) are strings.
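A few assignment sketches; the array names fruits and capitals are only illustrative, and declare -A requires Bash 4:
fruits=(apple banana cherry)      # indexed array, indexes 0, 1, 2
fruits[10]=kiwi                   # sparse: index 10, nothing in between
fruits+=(mango)                   # appended after the highest index (11)

declare -A capitals               # associative arrays must be declared
capitals=([France]=Paris [Japan]=Tokyo)
capitals[Italy]=Rome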
Getting values
${ARRAY[N]}
Expands to the value of the index N in the indexed array ARRAY. If N is a negative number, it's treated as the offset from the maximum assigned index (can't be used for assignment) - 1.
${ARRAY[S]}
Expands to the value of the index S in the associative array ARRAY.
${ARRAY[@]} ${ARRAY[*]} "${ARRAY[@]}" "${ARRAY[*]}"
Similar to mass-expanding the positional parameters, this expands to all elements. If unquoted, both subscripts * and @ expand to the same result; if quoted, @ expands to all elements individually quoted, while * expands to all elements quoted as a whole.
${ARRAY[@]:N:M} ${ARRAY[*]:N:M} "${ARRAY[@]:N:M}" "${ARRAY[*]:N:M}"
Similar to what this syntax does for the characters of a single string when doing substring expansion, this expands to M elements starting with element N. This way you can mass-expand individual indexes. The rules for quoting and the subscripts * and @ are the same as above for the other mass-expansions.
For clarification: when you use the subscripts @ or * for mass-expanding, the behaviour is exactly what it is for $@ and $* when mass-expanding the positional parameters. You should read this article to understand what's going on.
Metadata
Syntax
Description
${#ARRAY[N]}
Expands to the length of an individual array member at index N (stringlength)
${#ARRAY[STRING]}
Expands to the length of an individual associative array member at index STRING (stringlength)
${#ARRAY[@]} ${#ARRAY[*]}
Expands to the number of elements in ARRAY
${!ARRAY[@]} ${!ARRAY[*]}
Expands to the indexes in ARRAY (since Bash 3.0)
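A quick sketch of these expansions; the array name files is illustrative:
files=(one.txt two.txt three.txt)
files[10]=ten.txt
echo "${#files[@]}"    # 4  - number of elements, not the highest index
echo "${!files[@]}"    # 0 1 2 10  - the indexes actually in use
echo "${#files[0]}"    # 7  - string length of "one.txt"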
Destruction
The unset builtin command is used to destroy (unset) arrays or individual elements of arrays.
Example: you are in a directory with a file named x1, and you want to destroy an array element x[1], with
unset x[1]
then pathname expansion will expand to the filename x1 and break your processing! Even worse, if nullglob is set, your array/index will disappear.
To avoid this, always quote the array name and index:
unset -v 'x[1]'
This applies generally to all commands which take variable names as arguments. Single quotes are preferred.
Usage
Numerical Index
Numerically indexed arrays are easy to understand and easy to use. The Purpose and Indexing chapters above more or less explain all the needed background theory.
Now, some examples and comments for you.
Let's say we have an array sentence which is initialized as follows:
sentence=(Be liberal in what you accept, and conservative in what you send)
Since no special code is there to prevent word splitting (no quotes), every word there will be
assigned to an individual array element. When you count the words you see, you should get 12. Now
let's see if Bash has the same opinion:
$ echo ${#sentence[@]}
12
Yes, 12. Fine. You can take this number to walk through the array. Just subtract 1 from the number of elements, and start your walk at 0 (zero):
((n_elements=${#sentence[@]}, max_index=n_elements - 1))
for ((i = 0; i <= max_index; i++)); do
echo "Element $i: '${sentence[i]}'"
done
You always have to remember that numerical array indexing begins at 0 (zero); it seems newbies sometimes have problems with this.
The method above, walking through an array by just knowing its number of elements, only works for arrays where all elements are set, of course. If one element in the middle is removed, then the calculation is nonsense, because the number of elements doesn't correspond to the highest used index anymore (we call them "sparse arrays").
Associative (Bash 4)
Associative arrays (or hash tables) are not much more complicated than numerically indexed arrays. The numerical index value (in Bash a number starting at zero) just is replaced with an arbitrary string:
# declare -A, introduced with Bash 4 to declare an associative array
declare -A sentence
sentence[Begin]='Be liberal in what'
sentence[Middle]='you accept, and conservative'
sentence[End]='in what you send'
sentence['Very end']=...
Beware: don't rely on the fact that the elements are ordered in memory like they were
declared, it could look like this:
# output from 'set' command
sentence=([End]="in what you send" [Middle]="you accept, and conservative " [Begin]="Be liberal in what " ["Very end"]="...")
This effectively means, you can get the data back with "${sentence[@]}" , of course
(just like with numerical indexing), but you can't rely on a specific order. If you want to store
ordered data, or re-order data, go with numerical indexes. For associative arrays, you usually query
known index values:
for element in Begin Middle End "Very end"; do
printf "%s" "${sentence[$element]}"
done
printf "\n"
A nice code example: Checking for duplicate files using an associative array indexed with the
SHA sum of the files:
# Thanks to Tramp in #bash for the idea and the code
unset flist; declare -A flist;
while read -r sum fname; do
if [[ ${flist[$sum]} ]]; then
printf 'rm -- "%s" # Same as >%s<\n' "$fname" "${flist[$sum]}"
else
flist[$sum]="$fname"
fi
done < <(find . -type f -exec sha256sum {} +) >rmdups
Integer arrays
Any type attributes applied to an array apply to all elements of the array. If the integer attribute is set for either indexed or associative arrays, then values are considered as arithmetic for both compound and ordinary assignment, and the += operator is modified in the same way as for ordinary integer variables.
a[0] is assigned to the result of 2+4 . a[1] gets the result
of 2+2 . The last index in the first assignment is the result of a[2] ,
which has already been assigned as 4 , and its value is also given a[2]
.
This shows that even though any existing arrays named a in the current scope have
already been unset by using = instead of += to the compound assignment,
arithmetic variables within keys can self-reference any elements already assigned within the same
compound-assignment. With integer arrays this also applies to expressions to the right of the
= . (See
evaluation
order , the right side of an arithmetic assignment is typically evaluated first in Bash.)
The second compound assignment argument to declare uses += , so it appends after
the last element of the existing array rather than deleting it and creating a new array, so
a[5] gets 42 .
Lastly, the element whose index is the value of a[4] ( 4 ), gets
3 added to its existing value, making a[4] == 7 . Note that
having the integer attribute set this time causes += to add, rather than append a string, as it would
for a non-integer array.
The single quotes force the assignments to be evaluated in the environment of declare
. This is important because attributes are only applied to the assignment after assignment arguments
are processed. Without them the += compound assignment would have been invalid, and
strings would have been inserted into the integer array without evaluating the arithmetic. A special-case
of this is shown in the next section.
eval , but there are differences.) 'Todo: ' Discuss this in detail.
Indirection Arrays can be expanded indirectly using the indirect parameter expansion syntax.
Parameters whose values are of the form: name[index] , name[@] , or
name[*] when expanded indirectly produce the expected results. This is mainly useful
for passing arrays (especially multiple arrays) by name to a function.
This example is an "isSubset"-like predicate which returns true if all key-value pairs of the
array given as the first argument to isSubset correspond to a key-value of the array given as the
second argument. It demonstrates both indirect array expansion and indirect key-passing without eval
using the aforementioned special compound assignment expansion.
isSubset() {
local -a 'xkeys=("${!'"$1"'[@]}")' 'ykeys=("${!'"$2"'[@]}")'
set -- "${@/%/[key]}"
(( ${#xkeys[@]} <= ${#ykeys[@]} )) || return 1
local key
for key in "${xkeys[@]}"; do
[[ ${!2+_} && ${!1} == ${!2} ]] || return 1
done
}
main() {
# "a" is a subset of "b"
local -a 'a=({0..5})' 'b=({0..10})'
isSubset a b
echo $? # true
# "a" contains a key not in "b"
local -a 'a=([5]=5 {6..11})' 'b=({0..10})'
isSubset a b
echo $? # false
# "a" contains an element whose value != the corresponding member of "b"
local -a 'a=([5]=5 6 8 9 10)' 'b=({0..10})'
isSubset a b
echo $? # false
}
main
This script is one way of implementing a crude multidimensional associative array by storing array
definitions in an array and referencing them through indirection. The script takes two keys and dynamically
calls a function whose name is resolved from the array.
callFuncs() {
# Set up indirect references as positional parameters to minimize local name collisions.
set -- "${@:1:3}" ${2+'a["$1"]' "$1"'["$2"]'}
# The only way to test for set but null parameters is unfortunately to test each individually.
local x
for x; do
[[ $x ]] || return 0
done
local -A a=(
[foo]='([r]=f [s]=g [t]=h)'
[bar]='([u]=i [v]=j [w]=k)'
[baz]='([x]=l [y]=m [z]=n)'
) ${4+${a["$1"]+"${1}=${!3}"}} # For example, if "$1" is "bar" then define a new array: bar=([u]=i [v]=j [w]=k)
${4+${a["$1"]+"${!4-:}"}} # Now just lookup the new array. for inputs: "bar" "v", the function named "j" will be called, which prints "j" to stdout.
}
main() {
# Define functions named {f..n} which just print their own names.
local fun='() { echo "$FUNCNAME"; }' x
for x in {f..n}; do
eval "${x}${fun}"
done
callFuncs "$@"
}
main "$@"
Bugs and Portability Considerations
Arrays are not specified by POSIX. One-dimensional indexed arrays are supported using similar
syntax and semantics by most Korn-like shells.
Associative arrays are supported via typeset -A in Bash 4, Zsh, and Ksh93.
In Ksh93, arrays whose types are not given explicitly are not necessarily indexed. Arrays
defined using compound assignments which specify subscripts are associative by default. In Bash,
associative arrays can only be created by explicitly declaring them as associative, otherwise
they are always indexed. In addition, ksh93 has several other compound structures whose types
can be determined by the compound assignment syntax used to create them.
In Ksh93, using the = compound assignment operator unsets the array, including
any attributes that have been set on the array prior to assignment. In order to preserve attributes,
you must use the += operator. However, declaring an associative array, then attempting
an a=( ) style compound assignment without specifying indexes is an error. I can't
explain this inconsistency.
$ ksh -c 'function f { typeset -a a; a=([0]=foo [1]=bar); typeset -p a; }; f' # Attribute is lost, and since subscripts are given, we default to associative.
typeset -A a=([0]=foo [1]=bar)
$ ksh -c 'function f { typeset -a a; a+=([0]=foo [1]=bar); typeset -p a; }; f' # Now using += gives us the expected results.
typeset -a a=(foo bar)
$ ksh -c 'function f { typeset -A a; a=(foo bar); typeset -p a; }; f' # On top of that, the reverse does NOT unset the attribute. No idea why.
ksh: f: line 1: cannot append index array to associative array a
Only Bash and mksh support compound assignment with mixed explicit subscripts and automatically
incrementing subscripts. In ksh93, in order to specify individual subscripts within a compound
assignment, all subscripts must be given (or none). Zsh doesn't support specifying individual
subscripts at all.
Appending to a compound assignment is a fairly portable way to append elements after the last
index of an array. In Bash, this also sets append mode for all individual assignments within the
compound assignment, such that if a lower subscript is specified, subsequent elements will be
appended to previous values. In ksh93, it causes subscripts to be ignored, forcing appending everything
after the last element. (Appending has different meaning due to support for multi-dimensional
arrays and nested compound datastructures.)
$ ksh -c 'function f { typeset -a a; a+=(foo bar baz); a+=([3]=blah [0]=bork [1]=blarg [2]=zooj); typeset -p a; }; f' # ksh93 forces appending to the array, disregarding subscripts
typeset -a a=(foo bar baz '[3]=blah' '[0]=bork' '[1]=blarg' '[2]=zooj')
$ bash -c 'function f { typeset -a a; a+=(foo bar baz); a+=(blah [0]=bork blarg zooj); typeset -p a; }; f' # Bash applies += to every individual subscript.
declare -a a='([0]="foobork" [1]="barblarg" [2]="bazzooj" [3]="blah")'
$ mksh -c 'function f { typeset -a a; a+=(foo bar baz); a+=(blah [0]=bork blarg zooj); typeset -p a; }; f' # Mksh does like Bash, but clobbers previous values rather than appending.
set -A a
typeset a[0]=bork
typeset a[1]=blarg
typeset a[2]=zooj
typeset a[3]=blah
In Bash and Zsh, the alternate value assignment parameter expansion ( ${arr[idx]:=foo}
) evaluates the subscript twice, first to determine whether to expand the alternate, and second
to determine the index to assign the alternate to. See
evaluation
order .
$ : ${_[$(echo $RANDOM >&2)1]:=$(echo hi >&2)}
13574
hi
14485
In Zsh, arrays are indexed starting at 1 in its default mode. Emulation modes are required
in order to get any kind of portability.
Zsh and mksh do not support compound assignment arguments to typeset .
Ksh88 didn't support modern compound array assignment syntax. The original (and most portable)
way to assign multiple elements is to use the set -A name arg1 arg2 syntax. This
is supported by almost all shells that support ksh-like arrays except for Bash. Additionally,
these shells usually support an optional -s argument to set which performs
lexicographic sorting on either array elements or the positional parameters. Bash has no built-in
sorting ability other than the usual comparison operators.
$ ksh -c 'set -A arr -- foo bar bork baz; typeset -p arr' # Classic array assignment syntax
typeset -a arr=(foo bar bork baz)
$ ksh -c 'set -sA arr -- foo bar bork baz; typeset -p arr' # Native sorting!
typeset -a arr=(bar baz bork foo)
$ mksh -c 'set -sA arr -- foo "[3]=bar" "[2]=baz" "[7]=bork"; typeset -p arr' # Probably a bug. I think the maintainer is aware of it.
set -A arr
typeset arr[2]=baz
typeset arr[3]=bar
typeset arr[7]=bork
typeset arr[8]=foo
Evaluation order for assignments involving arrays varies significantly depending on context.
Notably, the order of evaluating the subscript or the value first can change in almost every shell
for both expansions and arithmetic variables. See
evaluation
order for details.
Bash 4.1.* and below cannot use negative subscripts to address array indexes relative to the
highest-numbered index. You must use the subscript expansion, i.e. "${arr[@]:(-n):1}"
, to expand the nth-last element (or the next-highest indexed after n if arr[n]
is unset). In Bash 4.2, you may expand (but not assign to) a negative index. In Bash 4.3, ksh93,
and zsh, you may both assign and expand negative offsets.
ksh93 also has an additional slice notation: "${arr[n..m]}" where n
and m are arithmetic expressions. These are needed for use with multi-dimensional
arrays.
Assigning or referencing negative indexes in mksh causes wrap-around. The max index appears
to be UINT_MAX , which would be addressed by arr[-1] .
So far, Bash's -v var test doesn't support individual array subscripts. You may
supply an array name to test whether an array is defined, but can't check an element. ksh93's
-v supports both. Other shells lack a -v test.
Bugs
Fixed in 4.3 Bash 4.2.* and earlier considers each chunk of a compound assignment, including
the subscript for globbing. The subscript part is considered quoted, but any unquoted glob characters
on the right-hand side of the [ ]= will be clumped with the subscript and counted
as a glob. Therefore, you must quote anything on the right of the = sign. This is
fixed in 4.3, so that each subscript assignment statement is expanded following the same rules
as an ordinary assignment. This also works correctly in ksh93.
Each word (the entire assignment) is subject to globbing and brace expansion. This appears to
trigger the same strange expansion mode as let , eval , other declaration
commands, and maybe more.
Fixed in 4.3: Indirection combined with another modifier expands arrays to a single word.
Evaluation order Here are some of the nasty details of array assignment evaluation order.
You can use this testcase code
to generate these results.
Each testcase prints evaluation order for indexed array assignment
contexts. Each context is tested for expansions (represented by digits) and
arithmetic (letters), ordered from left to right within the expression. The
output corresponds to the way evaluation is re-ordered for each shell:
a[ $1 a ]=${b[ $2 b ]:=${c[ $3 c ]}} No attributes
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} typeset -ia a
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} typeset -ia b
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} typeset -ia a b
(( a[ $1 a ] = b[ $2 b ] ${c[ $3 c ]} )) No attributes
(( a[ $1 a ] = ${b[ $2 b ]:=c[ $3 c ]} )) typeset -ia b
a+=( [ $1 a ]=${b[ $2 b ]:=${c[ $3 c ]}} [ $4 d ]=$(( $5 e )) ) typeset -a a
a+=( [ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} [ $4 d ]=${5}e ) typeset -ia a
bash: 4.2.42(1)-release
2 b 3 c 2 b 1 a
2 b 3 2 b 1 a c
2 b 3 2 b c 1 a
2 b 3 2 b c 1 a c
1 2 3 c b a
1 2 b 3 2 b c c a
1 2 b 3 c 2 b 4 5 e a d
1 2 b 3 2 b 4 5 a c d e
ksh93: Version AJM 93v- 2013-02-22
1 2 b b a
1 2 b b a
1 2 b b a
1 2 b b a
1 2 3 c b a
1 2 b b a
1 2 b b a 4 5 e d
1 2 b b a 4 5 d e
mksh: @(#)MIRBSD KSH R44 2013/02/24
2 b 3 c 1 a
2 b 3 1 a c
2 b 3 c 1 a
2 b 3 c 1 a
1 2 3 c a b
1 2 b 3 c a
1 2 b 3 c 4 5 e a d
1 2 b 3 4 5 a c d e
zsh: 5.0.2
2 b 3 c 2 b 1 a
2 b 3 2 b 1 a c
2 b 1 a
2 b 1 a
1 2 3 c b a
1 2 b a
1 2 b 3 c 2 b 4 5 e
1 2 b 3 2 b 4 5
Once upon a time I was playing with Windows Power Shell (WPSH) and discovered a very useful function for changing to commonly visited directories. The function, called "go", which was written by Peter Provost, grew on me as I used WPSH, so much so that I decided to implement it in bash after my WPSH experiments ended.
The problem is simple. Users of command line interfaces tend to visit the same directories
repeatedly over the course of their work, and having a way to get to these oft-visited places
without a lot of typing is nice.
The solution entails maintaining a map of key-value pairs, where each key is an alias to a
value, which is itself a commonly visited directory. The "go" function will, when given a
string input, look that string up in the map, and if the key is found, move to the directory
indicated by the value.
The map itself is just a specially formatted text file with one key-value entry per line,
while each entry is separated into key-value components by the first encountered colon, with
the left side being interpreted as the entry's key and the right side as its value.
Keys are typically short easily typed strings, while values can be arbitrary path names, and
even contain references to environment variables. The effect of this is that "go" can respond
dynamically to the environment.
Finally, the "go" function finds the map file by referring to an environment variable called
"GO_FILE", which should have as its value the full path to the map.
Before I ran into this idea I had maintained a number of shell aliases, (i.e. alias
dwork='cd $WORK_DIR'), to achieve a similar end, but every time I wanted to add a new location
I was forced to edit my .bashrc file. Then I would subsequently have to resource it or enter
the alias again on the command line. Since I typically keep multiple shells open this is just a
pain, and so I didn't add new aliases very often. With this method, a new entry in the "go
file" is immediately available to all open shells without any extra finagling.
This functionality is related to CDPATH, but they are not replacements for one another.
Indeed CDPATH is the more appropriate solution when you want to be able to "cd" to all or most
of the sub-directories of some parent. On the other hand, "go" works very well for getting to a
single directory easily. For example you might not want "/usr/local" in your CDPATH and still
want an abbreviated way of getting to "/usr/local/share".
The code for the go function, as well as some brief documentation follows.
##############################################
# GO
#
# Inspired by some Windows Power Shell code
# from Peter Provost (peterprovost.org)
#
# Here are some examples entries:
# work:${WORK_DIR}
# source:${SOURCE_DIR}
# dev:/c/dev
# object:${USER_OBJECT_DIR}
# debug:${USER_OBJECT_DIR}/debug
###############################################
export GO_FILE=~/.go_locations
function go
{
if [ -z "$GO_FILE" ]
then
echo "The variable GO_FILE is not set."
return
fi
if [ ! -e "$GO_FILE" ]
then
echo "The 'go file': '$GO_FILE' does not exist."
return
fi
dest=""
oldIFS=${IFS}
IFS=$'\n'
for entry in `cat ${GO_FILE}`
do
if [ "$1" = ${entry%%:*} ]
then
#echo $entry
dest=${entry##*:}
break
fi
done
if [ -n "$dest" ]
then
# Expand variables in the go file.
#echo $dest
cd `eval echo $dest`
else
echo "Invalid location, valid locations are:"
cat $GO_FILE
fi
export IFS=${oldIFS}
}
Using declare (which will detect when it was called from within a function and make the variable(s) local):
myfunc() {
  local var=VALUE

  # alternative, only when used INSIDE a function
  declare var=VALUE

  ...
}
The local keyword (or declaring a variable using the declare command) tags a variable to be treated completely local and separate inside the function where it was declared:
foo=external

printvalue() {
  local foo=internal

  echo $foo
}

# this will print "external"
echo $foo

# this will print "internal"
printvalue

# this will print - again - "external"
echo $foo
The environment space is not directly related to the topic about scope, but it's worth mentioning.
Every UNIX® process has a so-called environment. Other items, in addition to variables, are saved there, the so-called environment variables. When a child process is created (in Bash e.g. by simply executing another program, say ls to list files), the whole environment including the environment variables is copied to the new process. Reading that from the other side means: only variables that are part of the environment are available in the child process.
A variable can be tagged to be part of the environment using the export command:
# create a new variable and set it:
# -> This is a normal shell variable, not an environment variable!
myvariable="Hello world."

# make the variable visible to all child processes:
# -> Make it an environment variable: "export" it
export myvariable
Remember that the exported variable is a copy. There is no provision to "copy it back to the parent." See the article about Bash in the process tree!
1) under specific circumstances, also by the shell itself
: (colon) and input redirection
The : does nothing, it's a pseudo command, so it does not care about standard input. In the following code example, you want to test mail and logging, but not dump the database, or execute a shutdown:
#!/bin/bash
# Write info mails, do some tasks and bring down the system in a safe way
echo "System halt requested" | mail -s "System halt" netadmin@example.com
logger -t SYSHALT "System halt requested"

##### The following "code block" is effectively ignored
: <<"SOMEWORD"
/etc/init.d/mydatabase clean_stop
mydatabase_dump /var/db/db1 /mnt/fsrv0/backups/db1
logger -t SYSHALT "System halt: pre-shutdown actions done, now shutting down the system"
shutdown -h NOW
SOMEWORD
##### The ignored codeblock ends here
What happened? The : pseudo command was given some input by redirection (a here-document) - the pseudo command didn't care about it; effectively, the entire block was ignored.
The here-document tag was quoted here to avoid substitutions in the "commented" text! Check redirection with here-documents for more details.
Besides many bugfixes since Bash 3.2, Bash 4 brings some interesting new features for shell users and scripters. See also Bash changes for a small general overview with more details.
Not all of the changes and news are included here, just the biggest or most interesting ones. The changes to completion and the readline component are not covered. Though, if you're familiar with these parts of Bash (and Bash 4), feel free to write a chapter here.
The complete list of fixes and changes is in the CHANGES or NEWS file of your Bash 4 distribution.
The currently available stable version is the 4.2 release (February 13, 2011).
New or changed commands and keywords
The new "coproc" keyword
Bash 4 introduces the concept of coprocesses, a well-known feature of other shells. The basic concept is simple: it will start any command in the background and set up an array that is populated with accessible files that represent the file descriptors of the started process.
In other words: it lets you start a process in the background and communicate with its input and output data streams.
See: The coproc keyword
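A minimal sketch, not from the original text; it assumes bc is installed:
coproc BCPROC { bc -l; }
echo "3.5 * 2" >&"${BCPROC[1]}"   # write to the coprocess's standard input
read -r result <&"${BCPROC[0]}"   # read from its standard output
echo "$result"                    # 7.0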
The new "mapfile" builtin
The mapfile builtin is able to map the lines of a file directly into an array. This avoids having to fill an array yourself using a loop. It enables you to define the range of lines to read, and optionally call a callback, for example to display a progress bar.
See: The mapfile builtin command
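For example, a minimal sketch:
mapfile -t lines < /etc/passwd    # -t strips the trailing newlines
echo "read ${#lines[@]} lines"
echo "first line: ${lines[0]}"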
Changes to the "case" keyword
The case construct understands two new action list terminators:
The ;& terminator causes execution to continue with the next action list (rather than terminate the case construct).
The ;;& terminator causes the case construct to test the next given pattern instead of terminating the whole execution.
See: The case statement
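A small sketch of both terminators; the patterns are made up:
case "$1" in
  start)
    echo "starting"
    ;&    # fall through: also execute the next action list
  status)
    echo "checking status"
    ;;&   # keep testing the remaining patterns
  st*)
    echo "matched a pattern beginning with 'st'"
    ;;
esac
# called with "start", this prints all three lines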
Changes to the "declare" builtin
The -p option now prints all attributes and values of declared variables (or functions, when used with -f). The output is fully re-usable as input.
The new option -l declares a variable in a way that the content is converted to lowercase on assignment. For uppercase, the same applies to -u. The option -c causes the content to be capitalized before assignment.
declare -A declares associative arrays (see below).
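A short sketch of the case-converting attributes (variable names are illustrative):
declare -l lower
declare -u upper
lower="Hello World"    # stored as "hello world"
upper="Hello World"    # stored as "HELLO WORLD"
echo "$lower / $upper"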
Changes to the "read" builtin
The read builtin command has some interesting new features.
The -t option to specify a timeout value has been slightly tuned. It now accepts fractional values and the special value 0 (zero). When -t 0 is specified, read immediately returns with an exit status indicating whether there's data waiting or not. However, when a timeout is given and the read builtin times out, any partial data received up to the timeout is stored in the given variable rather than lost. When a timeout is hit, read exits with a code greater than 128.
A new option, -i, was introduced to be able to preload the input buffer with some text (when Readline is used, with -e). The user is able to change the text, or press return to accept it.
See: The read builtin command
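Two minimal sketches of the new behaviour:
# -t 0: don't read anything, just report whether input is already waiting
if read -t 0; then
    echo "data is waiting on standard input"
fi

# -i: preload the editing buffer (requires -e, i.e. Readline)
read -e -i "yes" -p "Proceed? " answer
echo "you answered: $answer"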
Changes to the "help" builtin
The builtin itself didn't change much, but the data displayed is more structured now. The help texts are in a better format, much easier to read.
There are two new options: -d displays the summary of a help text, and -m displays a manpage-like format.
Changes to the "ulimit" builtin
Besides the use of the 512-byte blocksize everywhere in POSIX mode, ulimit supports two new limits: -b for the maximum socket buffer size and -T for the maximum number of threads.
Expansions
Brace Expansion
The brace expansion was tuned to provide expansion results with leading zeros when requesting a row of numbers.
See: Brace expansion
Parameter Expansion
Methods to modify the case at expansion time have been added: you can modify the syntax by adding operators to the parameter name.
See: Case modification on parameter expansion
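Small sketches of both features:
echo {01..05}      # 01 02 03 04 05  (leading zeros are preserved)

var="hello World"
echo "${var^^}"    # HELLO WORLD
echo "${var,,}"    # hello world
echo "${var^}"     # Hello World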
Substring expansion
When using substring expansion on the positional parameters, a starting index of 0 now causes $0 to be prepended to the list (if the positional parameters are used). Before, this expansion started with $1:
# this should display $0 on Bash v4, $1 on Bash v3
echo ${@:0:1}
Globbing
There's a new shell option globstar. When
enabled, Bash will perform recursive globbing on ** – this means it matches
all directories and files from the current position in the filesystem, rather than only the
current level.
The new shell option dirspell enables
spelling corrections on directory names during globbing.
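For example:
shopt -s globstar
printf '%s\n' **/*.sh   # every .sh file at any depth below the current directory

shopt -s dirspell       # enable spelling correction for directory names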
See: Pathname expansion (globbing)
Associative Arrays
Besides the classic method of integer-
indexed arrays, Bash 4 supports associative arrays.
An associative array is an array indexed by an arbitrary string rather than by a number.
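A minimal sketch of how that looks (the array name here is arbitrary):
declare -A capital            # must be declared with -A first
capital[France]="Paris"
capital[Japan]="Tokyo"
echo "${capital[Japan]}"      # -> Tokyo
echo "keys: ${!capital[@]}"   # -> keys: France Japan (order not guaranteed)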
See: Arrays
Redirection
There is a new &>> redirection operator, which
appends the standard output and standard error to the named file. This is the same as the good
old >>FILE 2>&1 notation.
The parser now understands |& as a synonym for 2>&1 | ,
which redirects the standard error for a command through a pipe.
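For instance (make is just a stand-in for any command):
make &>> build.log      # append both stdout and stderr (same as >> build.log 2>&1)
make |& tee build.log   # pipe stdout and stderr together (same as 2>&1 | tee build.log)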
If a command is not found, the shell attempts to execute a shell function named
command_not_found_handle , supplying the command words as the function
arguments. This can be used to display user-friendly messages or perform different command
searches.
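A minimal sketch of such a handler:
command_not_found_handle() {
    # $1 is the missing command, the remaining arguments are its arguments
    echo "bash: $1: command not found (maybe a typo, or a package that needs installing?)" >&2
    return 127
}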
The behaviour of the set -e ( errexit ) mode was changed; it
now acts more intuitively (and is better documented in the manpage).
The output target for the xtrace ( set -x / set +x
) feature is configurable since Bash 4.1 (previously, it was fixed to stderr ):
a variable named BASH_XTRACEFD can be set to
the file descriptor that should receive the output.
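For example (the log file path is arbitrary):
exec 5> /tmp/xtrace.log   # open fd 5 for writing
BASH_XTRACEFD=5           # xtrace output now goes to fd 5 instead of stderr
set -x
echo "this command is traced into /tmp/xtrace.log"
set +x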
Bash 4.1 is able to log the history to syslog (only to be enabled at compile time in
config-top.h )
Update (Jan 26, 2016): I posted a short update about my usage of persistent history.
For someone spending most of his time in front of a Linux terminal, history is very
important. But traditional bash history has a number of limitations, especially when multiple
terminals are involved (I sometimes have dozens open). Also it's not very good at preserving
just the history you're interested in across reboots.
There are many approaches to improve the situation; here I want to discuss one I've been
using very successfully in the past few months - a simple "persistent history" that keeps track
of history across terminal instances, saving it into a dot-file in my home directory (
~/.persistent_history
). All commands, from all terminal instances, are saved there,
forever. I found this tremendously useful in my work - it saves me time almost every day.
Why does it go into a separate history and not the main one, which is accessible by all the
existing history manipulation tools? Because IMHO the latter is still worthwhile to keep
separate for the simple need of bringing up recent commands in a single terminal, without
mixing in commands from other terminals. While the terminal is open, I want to press "Up" and
get the previous command, even if I've executed a thousand other commands in other terminal
instances in the meantime.
Persistent history is very easy to set up. Here's the relevant portion of my ~/.bashrc:
# Append the last history entry (timestamp and command) to ~/.persistent_history,
# skipping consecutive duplicates. Assumes HISTTIMEFORMAT includes a date and a time.
log_bash_persistent_history()
{
    [[ $(history 1) =~ ^\ *[0-9]+\ +([^\ ]+\ [^\ ]+)\ +(.*)$ ]]
    local date_part="${BASH_REMATCH[1]}"
    local command_part="${BASH_REMATCH[2]}"
    if [ "$command_part" != "$PERSISTENT_HISTORY_LAST" ]
    then
        echo "$date_part" "|" "$command_part" >> ~/.persistent_history
        export PERSISTENT_HISTORY_LAST="$command_part"
    fi
}

# Stuff to do on PROMPT_COMMAND
run_on_prompt_command()
{
    log_bash_persistent_history
}

PROMPT_COMMAND="run_on_prompt_command"
The format of the history file created by this is:
2013-06-09 17:48:11 | cat ~/.persistent_history
2013-06-09 17:49:17 | vi /home/eliben/.bashrc
2013-06-09 17:49:23 | ls
Note that an environment variable is used to avoid useless duplication (i.e. if I run ls
twenty times in a row, it will only be recorded once).
OK, so we have ~/.persistent_history, how do we use it? First, I should say that it's not
used very often, which kind of connects to the point I made earlier about separating it from
the much higher-use regular command history. Sometimes I just look into the file with vi or
tail, but mostly this alias does the trick for me:
alias phgrep='cat ~/.persistent_history|grep --color'
The alias name mirrors another alias I've been using for ages:
alias hgrep='history|grep --color'
Another tool for managing persistent history is a trimmer. I said earlier this file keeps
the history "forever", which is a scary word - what if it grows too large? Well, first of all -
worry not. At work my history file grew to about 2 MB after 3 months of heavy usage, and 2 MB
is pretty small these days. Appending to the end of a file is very, very quick (I'm pretty sure
it's a constant-time operation) so the size doesn't matter much. But trimming is easy:
tail -20000 ~/.persistent_history | tee ~/.persistent_history
Trims to the last 20000 lines. This should be sufficient for at least a couple of months of
history, and your workflow should not really rely on more than that :-)
Finally, what's the use of having a tool like this without employing it to collect some
useless statistics. Here's a histogram of the 15 most common commands I've used on my home
machine's terminal over the past 3 months:
ls : 865
vi : 863
hg : 741
cd : 512
ll : 289
pss : 245
hst : 200
python : 168
make : 167
git : 148
time : 94
python3 : 88
./python : 88
hpu : 82
cat : 80
Some explanation: hst is an alias for hg st. hpu is an alias for hg pull -u. pss is my
awesome pss tool, and is the reason why you don't see any calls to grep and find in the list.
The proportion of Mercurial vs. git commands is likely to change in the very near future.
The bash session that is saved is the one for the terminal that is closed last.
If you want to save the commands for every session, you could use the trick explained
here.
export PROMPT_COMMAND='history -a'
To quote the manpage: "If set, the value is executed as a command prior to issuing each primary
prompt."
So every time my command has finished, it appends the unwritten history item to
~/.bash
ATTENTION: If you use multiple shell sessions and do not use this trick, you need to write
the history manually to preserve it using the command history -a.
"... Have you ever left your terminal logged in, only to find when you came back to it that a (supposed) friend had typed "rm -rf ~/*" and was hovering over the keyboard with threats along the lines of "lend me a fiver 'til Thursday, or I hit return"? Undoubtedly the person in question would not have had the nerve to inflict such a trauma upon you, and was doing it in jest. So you've probably never experienced the worst of such disasters.... ..."
"... I can't remember what happened in the succeeding minutes; my memory is just a blur. ..."
"... (We take dumps of the user files every Thursday; by Murphy's Law this had to happen on a Wednesday). ..."
"... By yet another miracle of good fortune, the terminal from which the damage had been done was still su'd to root (su is in /bin, remember?), so at least we stood a chance of all this working. ..."
[I had intended to leave the discussion of "rm -r *" behind after the compendium I sent earlier,
but I couldn't resist this one.
I also received a response from rutgers!seismo!hadron!jsdy (Joseph S. D. Yao) that described
building a list of "dangerous" commands into a shell and dropping into a query when a glob turns
up. They built it in so it couldn't be removed, like an alias. Anyway, on to the story! RWH.]
I didn't see the message that opened up the discussion on rm, but thought you might like to read
this sorry tale about the perils of rm....
(It was posted to net.unix some time ago, but I think our postnews didn't send it as far as
it should have!)
Have you ever left your terminal logged in, only to find when you came back to it that a (supposed)
friend had typed "rm -rf ~/*" and was hovering over the keyboard with threats along the lines
of "lend me a fiver 'til Thursday, or I hit return"? Undoubtedly the person in question would
not have had the nerve to inflict such a trauma upon you, and was doing it in jest. So you've
probably never experienced the worst of such disasters....
It was a quiet Wednesday afternoon. Wednesday, 1st October, 15:15 BST, to be precise, when
Peter, an office-mate of mine, leaned away from his terminal and said to me, "Mario, I'm having
a little trouble sending mail." Knowing that msg was capable of confusing even the most capable
of people, I sauntered over to his terminal to see what was wrong. A strange error message of
the form (I forget the exact details) "cannot access /foo/bar for userid 147" had been issued
by msg.
My first thought was "Who's userid 147?; the sender of the message, the destination, or what?"
So I leant over to another terminal, already logged in, and typed
grep 147 /etc/passwd
only to receive the response
/etc/passwd: No such file or directory.
Instantly, I guessed that something was amiss. This was confirmed when in response to
ls /etc
I got
ls: not found.
I suggested to Peter that it would be a good idea not to try anything for a while, and went
off to find our system manager. When I arrived at his office, his door was ajar, and within ten
seconds I realised what the problem was. James, our manager, was sat down, head in hands, hands
between knees, as one whose world has just come to an end. Our newly-appointed system programmer,
Neil, was beside him, gazing listlessly at the screen of his terminal. And at the top of the screen
I spied the following lines:
# cd
# rm -rf *
Oh, *****, I thought. That would just about explain it.
I can't remember what happened in the succeeding minutes; my memory is just a blur. I do remember
trying ls (again), ps, who and maybe a few other commands beside, all to no avail. The next thing
I remember was being at my terminal again (a multi-window graphics terminal), and typing
cd /
echo *
I owe a debt of thanks to David Korn for making echo a built-in of his shell; needless to say,
/bin, together with /bin/echo, had been deleted. What transpired in the next few minutes was that
/dev, /etc and /lib had also gone in their entirety; fortunately Neil had interrupted rm while
it was somewhere down below /news, and /tmp, /usr and /users were all untouched.
Meanwhile James had made for our tape cupboard and had retrieved what claimed to be a dump
tape of the root filesystem, taken four weeks earlier. The pressing question was, "How do we recover
the contents of the tape?". Not only had we lost /etc/restore, but all of the device entries for
the tape deck had vanished. And where does mknod live?
You guessed it, /etc.
How about recovery across Ethernet of any of this from another VAX? Well, /bin/tar had gone,
and thoughtfully the Berkeley people had put rcp in /bin in the 4.3 distribution. What's more,
none of the Ether stuff wanted to know without /etc/hosts at least. We found a version of cpio
in /usr/local, but that was unlikely to do us any good without a tape deck.
Alternatively, we could get the boot tape out and rebuild the root filesystem, but neither
James nor Neil had done that before, and we weren't sure that the first thing to happen would
be that the whole disk would be re-formatted, losing all our user files. (We take dumps of the
user files every Thursday; by Murphy's Law this had to happen on a Wednesday).
Another solution might be to borrow a disk from another VAX, boot off that, and tidy up later,
but that would have entailed calling the DEC engineer out, at the very least. We had a number
of users in the final throes of writing up PhD theses and the loss of maybe a week's work (not
to mention the machine down time) was unthinkable.
So, what to do? The next idea was to write a program to make a device descriptor for the tape
deck, but we all know where cc, as and ld live. Or maybe make skeletal entries for /etc/passwd,
/etc/hosts and so on, so that /usr/bin/ftp would work. By sheer luck, I had a gnuemacs still running
in one of my windows, which we could use to create passwd, etc., but the first step was to create
a directory to put them in.
Of course /bin/mkdir had gone, and so had /bin/mv, so we couldn't rename /tmp to /etc. However,
this looked like a reasonable line of attack.
By now we had been joined by Alasdair, our resident UNIX guru, and as luck would have it, someone
who knows VAX assembler. So our plan became this: write a program in assembler which would either
rename /tmp to /etc, or make /etc, assemble it on another VAX, uuencode it, type in the uuencoded
file using my gnu, uudecode it (some bright spark had thought to put uudecode in /usr/bin), run
it, and hey presto, it would all be plain sailing from there. By yet another miracle of good fortune,
the terminal from which the damage had been done was still su'd to root (su is in /bin, remember?),
so at least we stood a chance of all this working.
Off we set on our merry way, and within only an hour we had managed to concoct the dozen or
so lines of assembler to create /etc. The stripped binary was only 76 bytes long, so we converted
it to hex (slightly more readable than the output of uuencode), and typed it in using my editor.
If any of you ever have the same problem, here's the hex for future reference:
I had a handy program around (doesn't everybody?) for converting ASCII hex to binary, and the
output of /usr/bin/sum tallied with our original binary. But hang on---how do you set execute
permission without /bin/chmod? A few seconds thought (which as usual, lasted a couple of minutes)
suggested that we write the binary on top of an already existing binary, owned by me...problem
solved.
So along we trotted to the terminal with the root login, carefully remembered to set the umask
to 0 (so that I could create files in it using my gnu), and ran the binary. So now we had a /etc,
writable by all.
From there it was but a few easy steps to creating passwd, hosts, services, protocols, (etc),
and then ftp was willing to play ball. Then we recovered the contents of /bin across the ether
(it's amazing how much you come to miss ls after just a few, short hours), and selected files
from /etc. The key file was /etc/rrestore, with which we recovered /dev from the dump tape, and
the rest is history.
Now, you're asking yourself (as I am), what's the moral of this story? Well, for one thing,
you must always remember the immortal words, DON'T PANIC. Our initial reaction was to reboot the
machine and try everything as single user, but it's unlikely it would have come up without /etc/init
and /bin/sh. Rational thought saved us from this one.
The next thing to remember is that UNIX tools really can be put to unusual purposes. Even without
my gnuemacs, we could have survived by using, say, /usr/bin/grep as a substitute for /bin/cat.
And the final thing is, it's amazing how much of the system you can delete without it falling
apart completely. Apart from the fact that nobody could login (/bin/login?), and most of the useful
commands had gone, everything else seemed normal. Of course, some things can't stand life without
say /etc/termcap, or /dev/kmem, or /etc/utmp, but by and large it all hangs together.
I shall leave you with this question: if you were placed in the same situation, and had the
presence of mind that always comes with hindsight, could you have got out of it in a simpler or
easier way?
"... Unfortunately, even today, people have not learned that lesson. Whether it's at work, at home, or talking with friends, I keep hearing stories of people losing hundreds to thousands of files, sometimes they lose data worth actual dollars in time and resources that were used to develop the information. ..."
"... "I lost all my files from my hard drive? help please? I did a project that took me 3 days and now i lost it, its powerpoint presentation, where can i look for it? its not there where i save it, thank you" ..."
"... Please someone help me I last week brought a Toshiba Satellite laptop running windows 7, to replace my blue screening Dell vista laptop. On plugged in my sumo external hard drive to copy over some much treasured photos and some of my (work – music/writing.) it said installing driver. it said completed I clicked on the hard drive and found a copy of my documents from the new laptop and nothing else. ..."
Back in college, I used to work just about every day as a computer cluster consultant. I remember
a month after getting promoted to a supervisor, I was in the process of training a new consultant
in the library computer cluster. Suddenly, someone tapped me on the shoulder, and when I turned around
I was confronted with a frantic graduate student – a 30-something year old man who I believe was
Eastern European based on his accent – who was nearly in tears.
"Please need help – my document is all gone and disk stuck!" he said as he frantically pointed
to his PC.
Now, right off the bat I could have told you three facts about the guy. One glance at the blue
screen of the archaic DOS-based version of Wordperfect told me that – like most of the other graduate
students at the time – he had not yet decided to upgrade to the newer, point-and-click style word
processing software. For some reason, graduate students had become so accustomed to all of the keyboard
hot-keys associated with typing in a DOS-like environment that they all refused to evolve into point-and-click
users.
The second fact, gathered from a quick glance at his blank document screen and the sweat on his
brow told me that he had not saved his document as he worked. The last fact, based on his thick accent,
was that communicating the gravity of his situation wouldn't be easy. In fact, it was made even worse
by his answer to my question when I asked him when he last saved.
"I wrote 30 pages."
Calculated out at about 600 words a page, that's 18000 words. Ouch.
Then he pointed at the disk drive. The floppy disk was stuck, and from the marks on the drive
he had clearly tried to get it out with something like a paper clip. By the time I had carefully
fished the torn and destroyed disk out of the drive, it was clear he'd never recover anything off
of it. I asked him what was on it.
"My thesis."
I gulped. I asked him if he was serious. He was. I asked him if he'd made any backups. He hadn't.
Making Backups of Backups
If there is anything I learned during those early years of working with computers (and the people
that use them), it was how critical it is to not only save important stuff, but also to save it in
different places. I would back up floppy drives to those cool new zip drives as well as the local
PC hard drive. I never, ever had just a single copy of anything.
Unfortunately, even today, people have not learned that lesson. Whether it's at work, at home,
or talking with friends, I keep hearing stories of people losing hundreds to thousands of files,
sometimes they lose data worth actual dollars in time and resources that were used to develop the
information.
To drive that lesson home, I wanted to share a collection of stories that I found around the Internet
about some recent cases where people suffered that horrible fate – from thousands of files to entire
drives worth of data completely lost. These are people where the only remaining option is to start
running recovery software and praying, or in other cases paying thousands of dollars to a data recovery
firm and hoping there's something to find.
Not Backing Up Projects
The first example comes from Yahoo Answers , where a user that only provided a "?" for a user
name (out of embarrassment probably), posted:
"I lost all my files from my hard drive? help please? I did a project that took me 3 days
and now i lost it, its powerpoint presentation, where can i look for it? its not there where i
save it, thank you"
The folks answering immediately dove into suggesting that the person run recovery software, and
one person suggested that the person run a search on the computer for *.ppt.
... ... ...
Doing Backups Wrong
Then, there's a scenario of actually trying to do a backup and doing it wrong, losing all of the
files on the original drive. That was the case for the person who posted on
Tech Support Forum , that after purchasing a brand new Toshiba Laptop and attempting to transfer
old files from an external hard drive, inadvertently wiped the files on the hard drive.
Please someone help me I last week brought a Toshiba Satellite laptop running windows 7,
to replace my blue screening Dell vista laptop. On plugged in my sumo external hard drive to copy
over some much treasured photos and some of my (work – music/writing.) it said installing driver.
it said completed I clicked on the hard drive and found a copy of my documents from the new laptop
and nothing else.
While the description of the problem is a little broken, from the sound of it, the person thought
they were backing up in one direction, while they were actually copying in the other direction.
At least in this case not all of the original files were deleted, but a majority were.
"... as a general observation, large organizations/corporations tend to opt for incredibly expensive, incredibly complex, incredibly overblown backup "solutions" sold to them by vendors rather than using the stock, well-tested, reliable tools that they already have. ..."
"... in over 30 years of working in the field, the second-worst product I have ever had the misfortune to deal with is Legato (now EMC) NetWorker. ..."
Here's a random story, found via Kottke
, highlighting how Pixar came very close to losing a very large portion of Toy Story 2 , because someone did an rm *
(non geek: "remove all" command). And that's when they realized that their backups hadn't been working for a month. Then, the technical
director of the film noted that, because she wanted to see her family and kids, she had been making copies of the entire film and
transferring it to her home computer. After a careful trip from the Pixar offices to her home and back, they discovered that, indeed,
most of the film was saved:
Now, mostly, this is just an amusing little anecdote, but two things struck me:
How in the world do they not have more "official" backups of something as major as Toy Story 2 . In the clip they admit
that it was potentially 20 to 30 man-years of work that may have been lost. It makes no sense to me that this would rely on a single
backup system. I wonder if the copy, made by technical director Galyn Susman, was outside of corporate policy. You would have to
imagine that at a place like Pixar, there were significant concerns about things "getting out," and so the policy likely wouldn't
have looked all that kindly on copies being used on home computers.
The Mythbusters folks wonder if this story was
a little over-dramatized
, and others have
wondered how the technical director would have "multiple terabytes of source material" on her home computer back in 1999. That
resulted in an explanation from someone who was there that what was deleted was actually
the database containing the master copies of the characters, sets, animation, etc. rather than the movie itself. Of course, once
again, that makes you wonder how it is that no one else had a simple backup. You'd think such a thing would be backed up in dozens
of places around the globe for safe keeping...
Hans B PUFAL, 18 May 2012 @ 5:53am
Reminds me of .... Some decades ago I was called to a customer site, a bank, to diagnose a computer problem. On my arrival
early in the morning I noted a certain panic in the air. On querying my hosts I was told that there had been an "issue" the previous
night and that they were trying, unsuccessfully, to recover data from backup tapes. The process was failing and panic ensued.
Though this was not the problem I had been called on to investigate, I asked some probing questions, made a short phone call,
and provided the answer, much to the customer's relief.
What I found was that for months if not years the customer had been performing backups of indexed sequential files, that is
data files with associated index files, without once verifying that the backed-up data could be recovered. On the first occasion
of a problem requiring such a recovery they discovered that they just did not work.
The answer? Simply recreate the index files from the data. For efficiency reasons (this was a LONG time ago) the index files
referenced the data files by physical disk addresses. When the backup tapes were restored the data was of course no longer at
the original place on the disk and the index files were useless. A simple procedure to recreate the index files solved the problem.
Clearly whoever had designed that system had never tested a recovery, nor read the documentation which clearly stated the issue
and its simple solution.
So here is a case of making backups, but then finding them flawed when needed.
Anonymous Coward , 18 May 2012 @ 6:00am
Re: Reminds me of .... That's why, in the IT world, you ALWAYS do a "dry run" when you want to deploy something, and you
monitor the heck out of critical systems.
Rich Kulawiec , 18 May 2012 @ 6:30am
Two notes on backups
1. Everyone who has worked in computing for any period of time has their own backup horror story. I'll spare you mine, but
note that as a general observation, large organizations/corporations tend to opt for incredibly expensive, incredibly complex,
incredibly overblown backup "solutions" sold to them by vendors rather than using the stock, well-tested, reliable tools that
they already have. (e.g., "why should we use dump, which is open-source/reliable/portable/tested/proven/efficient/etc., when
we could drop $40K on closed-source/proprietary/non-portable/slow/bulky software from a vendor?")
Okay, okay, one comment: in over 30 years of working in the field, the second-worst product I have ever had the misfortune
to deal with is Legato (now EMC) NetWorker.
2. Hollywood has a massive backup and archiving problem. How do we know? Because they keep telling us about it. There are a
series of self-promoting commercials that they run in theaters before movies, in which they talk about all of the old films that
are slowly decaying in their canisters in vast warehouses, and how terrible this is, and how badly they need charitable contributions
from the public to save these treasures of cinema before they erode into dust, etc.
Let's skip the irony of Hollywood begging for money while they're paying professional liar Chris Dodd millions and get to the
technical point: the easiest and cheapest way to preserve all of these would be to back them up to the Internet. Yes, there's
a one-time expense of cleaning up the analog versions and then digitizing them at high resolution, but once that's done, all the
copies are free. There's no need for a data center or elaborate IT infrastructure: put 'em on BitTorrent and let the world do
the work. Or give copies to the Internet Archive. Whatever -- the point is that once we get past the analog issues, the only reason
that this is a problem is that they
made it a problem by refusing to surrender control.
Re: Two notes on backups "Real Men don't make backups. They upload it via ftp and let the world mirror it." - Linus Torvalds
Anonymous Coward , 18 May 2012 @ 7:02am
What I suspect is that she was copying the rendered footage. If the footage was rendered at a resolution and rate fitting to DVD
spec, that'd put the raw footage at around 3GB to 4GB for a full 90min, which just might fit on the 10GB HDD that were available
back then on a laptop computer (remember how small OSes were back then).
Even losing just the rendered raw footage (or even
processed footage), would be a massive setback. It takes a long time across a lot of very powerful computers to render film quality
footage. If it was processed footage then it's even more valuable as that takes a lot of man hours of post fx to make raw footage
presentable to a consumer audience.
a retelling by Oren Jacob
Oren Jacob, the Pixar director featured in the animation, has made a comment on the Quora post
that explains things in much more detail. The narration and animation were telling a story, as in storytelling. Despite the 99%
true caption at the end, a lot of details were left out which misrepresented what had happened. Still, it was a fun tale for anyone
who had dealt with backup problems. Oren Jacob's retelling in the comment makes it much more realistic and believable.
The terabytes level of data came from whoever posted the video on Quora. The video itself never mentions the actual amount of
data lost or the total amount the raw files represent. Oren says, vaguely, that it was much less than a terabyte. There were backups!
The last one was from two days previous to the delete event. The backup was flawed in that it produced files that when tested,
by rendering, exhibited errors.
They ended up patching a two-month old backup together with the home computer version (two weeks old). This was labor intensive
as some 30k files had to be individually checked.
The morals of the story:
Firstly, always test a restore at some point when implementing a backup system.
Secondly, don't panic! Panic can lead to further problems. They could well have introduced corruption in files by abruptly
unplugging the computer.
Thirdly, don't panic! Despite, somehow, deleting a large set of files these can be recovered apart from a backup system.
Deleting files, under Linux as well as just about any OS, only involves deleting the directory entries. There is software which
can recover those files as long as further use of the computer system doesn't end up overwriting what is now free space.
Mason Wheeler , 18 May 2012 @ 10:01am
Re: a retelling by Oren Jacob
Panic can lead to further problems. They could well have introduced corruption in files by abruptly unplugging the
computer.
What's worse? Corrupting some files or deleting all files?
In this case they were not dealing with unknown malware that was steadily erasing the
system as they watched. There was, apparently, a delete event at a single point in time that had repercussions that made things
disappear while people worked on the movie.
I'll bet things disappeared when whatever editing was being done required a file to be refreshed.
A refresh operation would make the related object disappear when the underlying file was no longer available.
Apart from the set of files that had already been deleted, more files could have been corrupted when the computer was unplugged.
Having said that, this occurred in 1999 when they were probably using the Ext2 filesystem under Linux. These days most everyone
uses a filesystem that includes journaling which protects against corruption that may occur when a computer loses power. Ext3
is a journaling filesystem and was introduced in 2001.
In 1998 I had to rebuild my entire home computer system. A power glitch introduced corruption in a Windows 95 system file and
use of a Norton recovery tool rendered the entire disk into a handful of unusable files. It took me ten hours to rebuild the OS
and re-install all the added hardware, software, and copy personal files from backup floppies. The next day I went out and bought
a UPS. Nowadays, sometimes the UPS for one of my computers will fail during one of the three dozen power outages a year I get
here. I no longer have problems with that because of journaling.
I've gotta story like this too. I've posted in the past on Techdirt that I used to work for Ticketmaster. There is an interesting
TM story that I don't think ever made it into the public, so I will tell it now.
Back in the 1980s each TM city was on an independent computer system (PDP Unibus systems with RM05 or CDC9766 disk drives).
The drives were fixed boxes about the size of a washing machine, with removable disk platters about the size of the proverbial
breadbox. Each platter held 256mb formatted.
Each city had its own operations policies, but generally, the systems ran with mirrored drives, the database was backed up
every night, archival copies were made monthly. In Chicago, where I worked, we did not have offsite backup in the 1980s. The Bay
Area had the most interesting system for offsite backup.
The Bay Area BASS operation, bought by TM in the mid 1980s, had a deal with a taxi driver. They would make their nightly backup
copies in house, and make an extra copy on a spare disk platter. This cabbie would come by the office about 2am each morning, and
they'd put the spare disk platter in his trunk, swapping it for the previous day's copy that had been in his trunk for 24 hours.
So, for the cost of about two platters ($700 at the time) and whatever cash they'd pay the cabbie, they had a mobile offsite copy
of their database circulating the Bay Area at all times.
When the World Series earthquake hit in October 1989, the TM office in downtown Oakland was badly damaged. The only copy of
the database that survived was the copy in the taxi cab.
That incident led TM corporate to establish much more sophisticated and redundant data redundancy policies.
Re: I've gotta story like this too
I like that story. Not that it matters anymore, but taxi cab storage was probably a
bad idea. The disks were undoubtedly the "Winchester" type and when powered down the head would be parked on a "landing strip".
Still, subjecting these drives to jolts from a taxi riding over bumps in the road could damage the head or cause it to be misaligned.
You would have known, though, if that actually turned out to be a problem. Also, I wouldn't trust a taxi driver with the company
database. Although, that is probably due to an unreasonable bias towards cab drivers. I won't mention the numerous arguments with
them (not in the U.S.) over fares and the one physical fight with a driver who nearly ran me down while I was walking.
Huw Davies , 19 May 2012 @ 1:20am
Re: Re: I've gotta story like this too RM05s are removable pack drives. The heads stay in the washing machine size unit
- all you remove are the platters.
What I want to know is this... She copied bits of a movie to her home system... how hard did they have to pull in the leashes
to keep Disney's lawyers from suing her to infinity and beyond after she admitted she'd done so (never mind the fact that her doing
so saved them apparently years of work...)?
A good backup is a backup that has been checked using an actual restore procedure. Anything else is just an approximation,
as the devil is often in the details.
Notable quotes:
"... All the tapes were then checked, and they were all ..."
The dangers of not testing your backup procedures and some common pitfalls to
avoid.
Backups. We all know the importance of making a backup of our most important systems.
Unfortunately, some of us also know that realizing the importance of performing backups often
is a lesson learned the hard way. Everyone has their scary backup stories. Here are mine.
Scary Story #1
Like a lot of people, my professional career started out in technical support. In my case, I
was part of a help-desk team for a large professional practice. Among other things, we were
responsible for performing PC LAN backups for a number of systems used by other departments.
For one especially important system, we acquired fancy new tape-backup equipment and a large
collection of tapes. A procedure was put in place, and before-you-go-home-at-night backups
became a standard. Some months later, a crash brought down the system, and all the data was
lost. Shortly thereafter, a call came in for the latest backup tape. It was located and
dispatched, and a recovery was attempted. The recovery failed, however, as the tape was
blank . A call came in for the next-to-last backup tape. Nervously, it was located and
dispatched, and a recovery was attempted. It also failed because this tape also was blank. Amid
long silences and pink-slip glares, panic started to set in as the tape from three nights prior
was called up. This attempt resulted in a lot of shouting.
All the tapes were then checked, and they were all blank. To add insult to injury,
the problem wasn't only that the tapes were blank--they weren't even formatted! The fancy new
backup equipment wasn't smart enough to realize the tapes were not formatted, so it allowed
them to be used. Note: writing good data to an unformatted tape is never a good idea.
Now, don't get me wrong, the backup procedures themselves were good. The problem was that no
one had ever tested the whole process--no one had ever attempted a recovery. Was it any
wonder, then, that each recovery failed?
For backups to work, you need to do two things: (1) define and implement a good procedure
and (2) test that it works.
To this day, I can't fathom how my boss (who had overall responsibility for the backup
procedures) managed not to get fired over this incident. And what happened there has always
stayed with me.
A Good Solution
When it comes to doing backups on Linux systems, a number of standard tools can help avoid
the problems discussed above. Marcel Gagné's excellent book (see Resources) contains a
simple yet useful script that not only performs the backup but verifies that things went well.
Then, after each backup, the script sends an e-mail to root detailing what occurred.
I'll run through the guts of a modified version of Marcel's script here, to show you how
easy this process actually is. This bash script starts by defining the location of a log and an
error file. Two mv commands then copy the previous log and error files to allow for the
examination of the next-to-last backup (if required):
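The listing that originally followed did not survive this copy; a minimal sketch consistent
with the description (the file locations are assumptions) would be:
# Assumed locations; adjust to taste.
backup_log=/var/log/backup.log
backup_err=/var/log/backup.err

# Keep the previous run's log and error files around for comparison.
mv $backup_log $backup_log.old
mv $backup_err $backup_err.old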
With the log and error files ready, a few echo commands append messages (note the use
of >>) to each of the files. The messages include the current date and time (which is
accessed using the back-ticked date command). The cd command then changes to the
location of the directory to be backed up. In this example, that directory is /mnt/data, but it
could be any location:
echo "Starting backup of /mnt/data: `date`." >> $backup_log
echo "Errors reported for backup/verify: `date`." >> $backup_err
cd /mnt/data
The backup then starts, using the tried and true tar command. The -cvf options
request the creation of a new archive (c), verbose mode (v) and the name of the file/device to
backup to (f). In this example, we backup to /dev/st0, the location of an attached SCSI tape
drive:
tar -cvf /dev/st0 . 2>>$backup_err
Any errors produced by this command are sent to STDERR (standard error). The above command
exploits this behaviour by appending anything sent to STDERR to the error file as well (using
the 2>> directive).
When the backup completes, the script then rewinds the tape using the mt command,
before listing the files on the tape with another tar command (the -t option lists the files in
the named archive). This is a simple way of verifying the contents of the tape. As before, we
append any errors reported during this tar command to the error file. Additionally,
informational messages are added to the log file at appropriate times:
mt -f /dev/st0 rewind
echo "Verifying this backup: `date`" >>$backup_log
tar -tvf /dev/st0 2>>$backup_err
echo "Backup complete: `date`" >>$backup_log
To conclude the script, we concatenate the error file to the log file (with cat ),
then e-mail the log file to root (where the -s option to the mail command allows the
specification of an appropriate subject line):
cat $backup_err >> $backup_log
mail -s "Backup status report for /mnt/data" root < $backup_log
And there you have it, Marcel's deceptively simple solution to performing a verified backup
and e-mailing the results to an interested party. If only we'd had something similar all those
years ago.
If you decide to use "tar" as your backup solution, you should probably take the time to
get to know the various command-line options that are available; type "man tar" for
a comprehensive list. You will also need to know how to access the appropriate backup media;
although all devices are treated like files in the Unix world, if you are writing to a
character device such as a tape, the name of the "file" is the device name itself (e.g.
"/dev/nst0" for a SCSI-based tape drive).
The following command will perform a backup of your entire Linux system onto the "/archive/"
file system, with the exception of the "/proc/" pseudo-filesystem, any mounted file systems in
"/mnt/", the "/archive/" file system (no sense backing up our backup sets!), as well as
Squid's rather large cache files (which are, in my opinion, a waste of backup media and
unnecessary to back up):
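The command itself was lost in this copy; a sketch consistent with the description above and
with the option-by-option explanation that follows (the archive name and exclude list are
assumptions) is:
tar -zcvpf /archive/full-backup-`date '+%d-%B-%Y'`.tar.gz \
    --directory / \
    --exclude=mnt --exclude=proc --exclude=archive --exclude=var/spool/squid .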
Don't be intimidated by the length of the command above! As we break it down into its
components, you will see the beauty of this powerful utility.
The above command specifies the options "z" (compress; the backup data will be compressed
with "gzip"), "c" (create; an archive file is being created), "v" (verbose; display a list of
files as they get backed up), and "p" (preserve permissions; file protection information will
be "remembered" so it can be restored). The "f" (file) option states that the very next
argument will be the name of the archive file (or device) being written. Notice how a filename
which contains the current date is derived, simply by enclosing the "date" command between two
back-quote characters. A common naming convention is to add a "tar" suffix for non-compressed
archives, and a "tar.gz" suffix for compressed ones.
The "--directory" option tells tar to first switch to the following directory path (the "/"
directory in this example) prior to starting the backup. The "--exclude" options tell tar not
to bother backing up the specified directories or files. Finally, the "." character tells tar
that it should back up everything in the current directory.
Note: It is important to realize that the options to tar are cAsE-sEnSiTiVe! In addition,
most of the options can be specified as either single mnemonic characters (e.g. "f"), or by
their easier-to-memorize full option names (e.g. "file"). The mnemonic representations are
identified by prefixing them with a "-" character, while the full names are prefixed with two
such characters. Again, see the "man" pages for information on using tar.
Another example, this time writing only the specified file systems (as opposed to writing them
all with exceptions, as demonstrated in the example above) onto a SCSI tape drive, follows:
tar -cvpf /dev/nst0 --label="Backup set created on `date '+%d-%B-%Y'`." \
    --directory / --exclude=var/spool/squid etc home usr/local var/spool
In the above command, notice that the "z" (compress) option is not used. I strongly recommend
against writing compressed data to tape, because if data on a portion of the tape becomes
corrupted, you will lose your entire backup set! However, archive files stored without
compression have a very high recoverability for non-affected files, even if portions of the
tape archive are corrupted. Because the tape drive is a character device, it is not possible
to specify an actual file name. Therefore, the file name used as an argument to tar is simply
the name of the device, "/dev/nst0", the first tape device on the SCSI bus.
Note: The "/dev/nst0" device does not rewind after the backup set is written; therefore it is
possible to write multiple sets on one tape. (You may also refer to the device as "/dev/st0",
in which case the tape is automatically rewound after the backup set is written.) Since we
aren't able to specify a filename for the backup set, the "--label" option can be used to
write some information about the backup set into the archive file itself.
Finally, only the files contained in "/etc/", "/home/", "/usr/local", and "/var/spool/" (with
the exception of Squid's cache data files) are written to the tape.
When working with tapes, you can use the following commands to rewind, and then eject your
tape:
mt -f /dev/nst0 rewind
mt -f /dev/nst0 offline
Tip: You will notice that leading "/" (slash) characters are stripped by tar when an archive
file is created. This is tar's default mode of operation, and it is intended to protect you
from overwriting critical files with older versions of those files, should you mistakenly
recover the wrong file(s) in a restore operation. If you really dislike this behavior
(remember, it's a feature!) you can specify the "--absolute-names" (-P) option to tar, which
will preserve the leading slashes. However, I don't recommend doing so, as it is dangerous!
Yes, this is completely possible. First and foremost, you will need at least 2 USB ports available,
or 1 USB port and 1 CD-Drive.
You start by booting into a Live-CD version of Ubuntu with your
hard-drive where it is and the target device plugged into USB. Mount your internal drive and target
USB to any paths you like.
Open up a terminal and enter the following command:
tar cpf - --xattrs -C /path/to/internal . | tar xpf - -C /path/to/target/usb
You can also look into doing this through a live installation and a utility called CloneZilla,
but I am unsure of exactly how to use CloneZilla. The above method is what I used to copy my 128GB
hard-drive's installation of Ubuntu to a 64GB flash drive.
2) Clone again the internal or external drive in its entirety to another drive:
Use the "Clonezilla" utility, mentioned in the very last paragraph of my original answer, to
clone the original internal drive to another external drive to make two such external bootable
drives to keep track of.
Anyone who has started a terminal in Linux is familiar with the default Bash prompt:
[user@host ~]$
But did you know that this is completely customizable and can contain some very useful information?
Here are a few hidden treasures you can use to customize your Bash prompt.
How is the Bash prompt set?
The Bash prompt is set by the environment variable PS1 (Prompt String 1), which is used for interactive
shell prompts. There is also a PS2 variable, which is used when more input is required to complete
a Bash command.
[dneary@dhcp-41-137 ~]$ export PS1="[Linux Rulez]$ "
[Linux Rulez]$ export PS2="... "
[Linux Rulez]$ if true; then
... echo "Success!"
... fi
Success!
Where is the value of PS1 set?
PS1 is a regular environment variable.
The system default value is set in /etc/bashrc . On my system, the default prompt is set with
this line:
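The quoted line did not survive this copy; on Fedora-family systems it is typically something
close to the following (treat this as an assumption about your distribution's /etc/bashrc):
[ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ "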
In the PROMPTING section of man bash , you can find a description of all the special characters
in PS1 and PS2 . The following are the default options:
\u : Username
\h : Short hostname
\W : Basename of the current working directory ( ~ for home, the end of the current directory
elsewhere)
\s : Shell name ( bash or sh , depending on how the shell is called)
\v : The shell's version
What other special strings can I use in the prompts?
There are a number of special strings that can be useful.
\d : Expands to the date in the format "Tue Jun 27"
\D{fmt} : Allows custom date formats; see man strftime for the available options
\D{%c} : Gives the date and time in the current locale
\n : Include a new line (see multi-line prompts below)
\w : The full path of the current working directory
\H : The full hostname for the current machine
\! : History number; you can run any previous command with its history number by using the
shell history event designator ! followed by the number for the specific command you are interested
in. (Using Linux history is yet another tutorial...)
There are many other special characters; you can see the full list in the PROMPTING section of
the Bash man page.
Multi-line prompts
If you use longer prompts (say if you include \H or \w or a full date-time ), you may want to
break things over two lines. Here is an example of a multi-line prompt, with the date, time, and
current working directory on one line, and username @hostname on the second line:
PS1="\D{%c} \w\n[\u@\H]$ "
Are there any other interesting things I can do?
One thing people occasionally do is create colorful prompts. While I find them annoying and distracting,
you may like them. For example, to change the date-time above to display in red text, the directory
in cyan, and your username on a yellow background, you could try this:
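The prompt string itself was lost in this copy; a reconstruction that matches the description,
built from the escape codes explained just below, would be:
PS1="\[\e[31m\]\D{%c}\[\e[0m\] \[\e[36m\]\w\[\e[0m\]\n[\[\e[1;43m\]\u\[\e[0m\]@\H]$ "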
\e[.. is an escape character. What follows is a special escape sequence to change the color
(or other characteristic) in the terminal
31m is red text ( 41m would be a red background)
36m is cyan text
1;43m declares a yellow background ( 1;33m would be yellow text)
\[\e[0m\] at the end resets the colors to the terminal defaults
You can find more colors and tips in the
Bash prompt HOWTO
. You can even make text inverted or blinking! Why on earth anyone would want to do this, I don't
know. But you can!
When you're in a version-controlled directory, it includes the VCS information (e.g. the git branch
and status), which is really handy if you do development.
Victorhck on 07 Jul 2017: An easy drag and drop interface to build your own .bashrc/PS1
configuration
Today, I have stumbled upon a collection of useful Bash scripts for heavy command-line users.
These scripts, known as Bash-Snippets, might be quite helpful for those who live in the
terminal all day. Want to check the weather where you live? This script will do that for you.
Wondering what the stock prices are? You can run the script that displays the current details
of a stock. Feeling bored? You can watch some YouTube videos. All from the command line. You
don't need to install any heavy, memory-consuming GUI applications.
Bash-Snippets provides the following 12 useful tools:
currency – Currency converter.
stocks – Provides certain Stock details.
weather – Displays weather details of your place.
crypt – Encrypt and decrypt files.
movies – Search and display a movie details.
taste – Recommendation engine that provides three similar items like the supplied
item (The items can be books, music, artists, movies, and games etc).
short – URL Shortener.
geo – Provides the details of wan, lan, router, dns, mac, and ip.
cheat –
Provides cheat-sheets for various Linux commands
.
ytview – Watch YouTube from Terminal.
cloudup – A tool to back up your GitHub repositories to Bitbucket.
qrify – Turns the given string into a qr code.
Bash-Snippets – A Collection Of Useful BASH Scripts For Heavy Commandline
Users
Installation
You can install these scripts on any OS that supports BASH.
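The actual installation commands were dropped from this copy; the usual pattern for a project
like this (the repository URL and script name are assumptions, check the project page) is
roughly:
git clone https://github.com/alexanderepstein/Bash-Snippets
cd Bash-Snippets
./install.sh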
This will ask you which scripts to install. Just type Y and press the ENTER key to install the
respective script. If you don't want to install a particular script, type N and hit ENTER.
[Jul 16, 2017] Classifier: organize files by classifying them into folders of Xls, Docs, .png, .jpeg, video, music, pdfs, images, ISO, etc.
If I'm not wrong, everyone's Downloads folder is pretty sloppy compared with other folders,
because most of the downloaded files just sit there and we can't delete them blindly, which
risks losing some important files. It's also not practical to create a bunch of folders based
on the file types and move the appropriate files into them manually.
So, what can we do to avoid this? Better to organize files with the help of Classifier; later
we can delete unnecessary files easily. The Classifier app is written in Python.
How to organize a directory? Simply navigate to the directory where you want to
organize/classify your files and run the classifier command; it will take a few minutes
or more depending on how many files the directory contains.
Make a note: there is no undo option if you want to go back, so finalize things before running
classifier in a directory. Also, it won't move folders.
Install Classifier in Linux through pip
pip is the recommended tool for installing Python packages in Linux. Use the pip command
instead of the package manager to get the latest build.
For Debian based systems.
$ sudo apt-get install python-pip
For RHEL/CentOS based systems.
$ sudo yum install python-pip
For Fedora
$ sudo dnf install python-pip
For openSUSE
$ sudo zypper install python-pip
For Arch Linux based systems
$ sudo pacman -S python-pip
Finally run the pip tool to install Classifier on Linux.
$ sudo pip install classifier
Organize files into specific folders
First I will go with the default option, which organizes files into specific folders by type.
This will create a bunch of directories based on the file types and move the files into them.
See how my directory looks now (before running the classifier command).
$ pwd
/home/magi/classifier
$ ls -lh
total 139M
-rw-r--r-- 1 magi magi 4.5M Mar 21 21:21 Aaluma_Doluma.mp3
-rw-r--r-- 1 magi magi 26K Mar 21 21:12 battery-monitor_0.4-xenial_all.deb
-rw-r--r-- 1 magi magi 24K Mar 21 21:12 buku-command-line-bookmark-manager-linux.png
-rw-r--r-- 1 magi magi 0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi 25 Mar 21 21:13 core.py
-rw-r--r-- 1 magi magi 101K Mar 21 21:12 drawing.svg
-rw-r--r-- 1 magi magi 86M Mar 21 21:12 go1.8.linux-amd64.tar.gz
-rw-r--r-- 1 magi magi 28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi 27 Mar 21 21:13 index.php
-rw-r--r-- 1 magi magi 48M Apr 30 2016 Kabali Tamil Movie _ Official Teaser _ Rajinikanth _ Radhika Apte _ Pa Ranjith-9mdJV5-eias.webm
-rw-r--r-- 1 magi magi 28 Mar 21 21:12 magi1.txt
-rw-r--r-- 1 magi magi 66 Mar 21 21:12 ppa.py
-rw-r--r-- 1 magi magi 1.1K Mar 21 21:12 Release.html
-rw-r--r-- 1 magi magi 45K Mar 21 21:12 v0.4.zip
Navigate to the directory where you want to organize files, then run the classifier
command without any options to achieve it.
$ classifier
Scanning Files
Done!
See how the directory looks after running the classifier command:
$ ls -lh
total 44K
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Archives
-rw-r--r-- 1 magi magi 0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi 25 Mar 21 21:13 core.py
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 DEBPackages
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Documents
-rw-r--r-- 1 magi magi 28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi 27 Mar 21 21:13 index.php
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Music
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Pictures
-rw-r--r-- 1 magi magi 66 Mar 21 21:12 ppa.py
-rw-r--r-- 1 magi magi 1.1K Mar 21 21:12 Release.html
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Videos
Make a note: this will organize only general-category files such as docs, audio, video,
pictures, archives, etc., and won't organize .py, .html, .php, etc.
Classify specific file types into a specific folder
To classify specific file types into a specific folder, just add -st (mention the
file type) and -sf (folder name) to the classifier command.
For better understanding, I'm going to move .py, .html and .php files into a
Development folder. See the command to achieve it below.
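A command consistent with the -st and -sf flags described above (the exact argument format is
an assumption; check classifier --help on your system) would be:
classifier -st .py .html .php -sf "Development"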
If the folder doesn't exist, it will create a new one and organize the files into it. See
the following output: it created the Development directory and moved all the files
inside it.
$ ls -lh
total 28K
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Archives
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 DEBPackages
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:51 Development
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Documents
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Music
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Pictures
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Videos
For better clarification, I have listed the Development folder's files.
$ ls -lh Development/
total 12K
-rw-r--r-- 1 magi magi 0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi 25 Mar 21 21:13 core.py
-rw-r--r-- 1 magi magi 28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi 27 Mar 21 21:13 index.php
-rw-r--r-- 1 magi magi 0 Mar 21 21:43 ppa.py
-rw-r--r-- 1 magi magi 0 Mar 21 21:43 Release.html
Classifier can also organize files by date: it will organize the current directory's files based on their dates.
Crowdsourcing, Open Data and Precarious Labour
Crowdsourcing and microtransactions are two halves of the same coin: they both mark new stages
in the continuing devaluation of labour. By Allana Mayer, February 24th, 2016.
The cultural heritage industries (libraries, archives, museums,
and galleries, often collectively called GLAMs) like to consider themselves the tech industry's
little siblings. We're working to develop things like Linked Open Data, a decentralized network
of collaboratively-improved descriptive metadata; we're building our own open-source tech to
make our catalogues and collections more useful; we're pushing scholarly publishing out from
behind paywalls and into open-access platforms; we're driving innovations in accessible tech.
We're only different in a few ways. One, we're a distinctly
feminized set of professions , which comes with a large set of internally- and
externally-imposed assumptions. Two, we rely very heavily on volunteer labour, and not just in
the
internship-and-exposure vein : often retirees and non-primary wage-earners are the people
we "couldn't do without." Three, the underlying narrative of a "helping" profession !
essentially a social service ! can push us to ignore the first two distinctions, while driving
ourselves to perform more and expect less.
I suppose the major way we're different is that tech doesn't acknowledge us, treat us with
respect, build things for us, or partner with us, unless they need a philanthropic opportunity.
Although, when some ingenue autodidact bootstraps himself up to a billion-dollar IPO, there's a
good chance he's been educating himself using our free resources. Regardless, I imagine a few
of the issues true in GLAMs are also true in tech culture, especially in regards to labour and
how it's compensated.
Here's an example. One of the latest trends is crowdsourcing: admitting we don't have all
the answers, and letting users suggest some metadata for our records. (Not to be confused with
crowdfunding.) The biggest example of this is Flickr Commons: the Library of Congress partnered
with Yahoo! to publish thousands of images that had somehow ended up in the LOC's collection
without identifying information. Flickr users were invited to tag pictures with their own
keywords or suggest descriptions using comments.
Many orphaned works (content whose copyright status is unclear) found their way conclusively
out into the public domain (or back into copyright) this way. Other popular crowdsourcing
models include gamification ,
transcription of handwritten documents (which can't be done with Optical Character
Recognition), or proofreading OCR output on digitized texts. The most-discussed side benefits
of such projects include the PR campaign that raises general awareness about the organization,
and a "lifting of the curtain" on our descriptive mechanisms.
The problem with crowdsourcing is that it's been conclusively proven not to
function in the way we imagine it does: a handful of users end up contributing massive amounts
of labour, while the majority of those signed up might do a few tasks and then disappear. Seven
users in the "Transcribe Bentham" project contributed to 70% of
the manuscripts completed; 10 "power-taggers" did the lion's share of the Flickr
Commons' image-identification work. The function of the distributed digital model of
volunteerism is that those users won't be compensated, even though many came to regard their
accomplishments as full-time jobs .
It's not what you're thinking: many of these contributors already had full-time jobs ,
likely ones that allowed them time to mess around on the Internet during working hours. Many
were subject-matter experts, such as the vintage-machinery hobbyist who
created entire datasets of machine-specific terminology in the form of image tags. (By the way,
we have a cute name for this: "folksonomy," a user-built taxonomy. Nothing like reducing unpaid
labour to a deeply colonial ascription of communalism.) In this way, we don't have precisely
the free-labour-for-exposure/project-experience
problem the tech industry has ; it's not our internships that are the problem. We've moved
past that, treating even our volunteer labour as a series of microtransactions. Nobody's
getting even the dubious benefit of job-shadowing, first-hand looks at business practices, or
networking. We've completely obfuscated our own means of production. People who submit metadata
or transcriptions don't even have a means of seeing how the institution reviews and ingests
their work, and often, to see how their work ultimately benefits the public.
All this really says to me is: we could've hired subject experts to consult, and given them
a living wage to do so, instead of building platforms to dehumanize labour. It also means
our systems rely on privilege , and will undoubtedly contain and promote content with a
privileged bias, as Wikipedia does. (And hey, even Wikipedia contributions can sometimes result
in paid Wikipedian-in-Residence jobs.)
If libraries continue on with their veneer of passive and objective authorities that offer
free access to all knowledge, this underlying bias will continue to propagate subconsciously.
As in
Mechanical Turk , being "slightly more
diverse than we used to be" doesn't get us any points, nor does it assure anyone that our
labour isn't coming from countries with long-exploited workers.
I also want to draw parallels between the free labour of crowdsourcing and the free labour
offered in civic hackathons or open-data contests. Specifically, I'd argue that open-data
projects are less (but still definitely) abusive to their volunteers, because at least those volunteers have a
portfolio object or other deliverable to show for their work. They often work in groups and get
to network, whereas heritage crowdsourcers work in isolation.
There's also the potential for converting open-data projects to something monetizable: for
example, a Toronto-specific bike-route app can easily be reconfigured for other cities and
sold; while the Toronto version stays free under the terms of the civic initiative, freemium
options can be added. The volunteers who supply thousands of transcriptions or tags can't
usually download their own datasets and convert them into something portfolio-worthy, let alone
sellable. Those data are useless without their digital objects, and those digital objects still
belong to the museum or library.
Crowdsourcing and microtransactions are two halves of the same coin: they both mark new
stages in the continuing devaluation of labour, and they both enable misuse and abuse of people
who increasingly find themselves with few alternatives. If we're not offering these people
jobs, reference letters, training, performance reviews, a "foot in the door" (cronyist as that
is), or even acknowledgement by name, what impetus do they have to contribute? As with
Wikipedia, I think the intrinsic motivation for many people to supply us with free labour is
one of two things: either they love being right, or they've been convinced by the feel-good
rhetoric that they're adding to the net good of the world. Of course, trained librarians,
archivists, and museum workers have fallen sway to the
conflation of labour and identity , too, but we expect to be paid for it.
As in tech, stereotypes and PR obfuscate labour in cultural heritage. For tech, an
entrepreneurial spirit and a tendency to buck traditional thinking; for GLAMs, a passion for
public service and opening up access to treasures ancient and modern. Of course, tech
celebrates the autodidactic dropout; in GLAMs, you need a masters. Period. Maybe two. And
entry-level jobs in GLAMs require one or more years of experience, across the board.
When library and archives students go into massive student debt, they're rarely apprised of
the
constant shortfall of funding for government-agency positions, nor do they get told how
much work is done by volunteers (and, consequently, how much of the job is monitoring and
babysitting said volunteers). And they're not trained with enough technological competency to
sysadmin anything , let alone build a platform that pulls crowdsourced data into an
authoritative record. The costs of commissioning these platforms aren't yet being made public,
but I bet paying subject experts for their hourly labour would be cheaper.
Solutions
I've tried my hand at many of the crowdsourcing and gamifying interfaces I'm here to
critique. I've never been caught up in the "passion" ascribed to those super-volunteers who
deliver huge amounts of work. But I can tally up other ways I contribute to this problem: I
volunteer for scholarly tasks such as peer-reviewing, committee work, and travelling on my own
dime to present. I did an unpaid internship without receiving class credit. I've put my
research behind a paywall. I'm complicit in the established practices of the industry, which
sits uneasily between academic and social work: neither of those spheres have ever been
profit-generators, and have always used their codified altruism as ways to finagle more labour
for less money.
It's easy to suggest that we outlaw crowdsourced volunteer work, and outlaw
microtransactions on Fiverr and MTurk, just as the easy answer would be to outlaw Uber and Lyft
for divorcing administration from labour standards. Ideally, we'd make it illegal for
technology to wade between workers and fair compensation.
But that's not going to happen, so we need alternatives. Just as unpaid internships are
being eliminated ad-hoc through corporate pledges, rather than being prohibited
region-by-region, we need pledges from cultural-heritage institutions that they will pay for
labour where possible, and offer concrete incentives to volunteer or intern otherwise. Budgets
may be shrinking, but that's no reason not to compensate people at least through resume and
portfolio entries. The best template we've got so far is the Society of
American Archivists' volunteer best practices , which includes "adequate training and
supervision" provisions, which I interpret to mean outlawing microtransactions entirely. The
Citizen Science
Alliance , similarly, insists on "concrete outcomes" for its crowdsourcing projects, to "
never
waste the time of volunteers ." It's vague, but it's something.
We can boycott and publicly shame those organizations that promote these projects as fun
ways to volunteer, and lobby them to instead seek out subject experts for more significant
collaboration. We've seen a few
efforts to shame job-posters for unicorn requirements and pathetic salaries, but they've
flagged without productive alternatives to blind rage.
There are plenty more band-aid solutions. Groups like Shatter The Ceiling offer cash to women of colour who
take unpaid internships. GLAM-specific internship awards are relatively common
, but could: be bigger, focus on diverse applicants who need extra support, and have
eligibility requirements that don't exclude people who most need them (such as part-time
students, who are often working full-time to put themselves through school). Better yet, we can
build a tech platform that enables paid work, or at least meaningful volunteer projects. We
need nationalized or non-profit recruiting systems (a digital "volunteer bureau") that match
subject experts with the institutions that need their help. One that doesn't take a cut
from every transaction, or reinforce power imbalances, the way Uber does. GLAMs might even find
ways to combine projects, so that one person's work can benefit multiple institutions.
GLAMs could use plenty of other help, too: feedback from UX designers on our catalogue
interfaces, helpful
tools , customization of our vendor platforms, even turning libraries into Tor relays or exits .
The open-source community seems to be looking for ways to contribute meaningful volunteer
labour to grateful non-profits; this would be a good start.
What's most important is that cultural heritage preserves the ostensible benefits of
crowdsourcing – opening our collections and processes up for scrutiny, and admitting the limits of our knowledge –
without the exploitative labour practices. Just like in tech, a few more glimpses behind the
curtain wouldn't go astray. But it would require deeper cultural shifts, not least in the
self-perceptions of GLAM workers: away from overprotective stewards of information, constantly
threatened by dwindling budgets and unfamiliar technologies, and towards facilitators,
participants in the communities whose histories we hold.
Do you sometimes wonder how to use parameters with your scripts, and how to pass them to internal
functions or other scripts? Do you need to do simple validity tests on parameters or options, or
perform simple extraction and replacement operations on the parameter strings? This tip helps you
with parameter use and the various parameter expansions available in the bash shell.
The intelligence community is about to get the equivalent of an adrenaline shot to the chest.
This summer, a $600 million computing cloud developed by Amazon Web Services for the Central Intelligence
Agency over the past year will begin servicing all 17 agencies that make up the intelligence community.
If the technology plays out as officials envision, it will usher in a new era of cooperation and
coordination, allowing agencies to share information and services much more easily and avoid the
kind of intelligence gaps that preceded the Sept. 11, 2001, terrorist attacks.
For the first time, agencies within the intelligence community will be able to order a variety
of on-demand computing and analytic services from the CIA and National Security Agency. What's more,
they'll only pay for what they use.
The vision was first outlined in the Intelligence Community Information Technology Enterprise
plan championed by Director of National Intelligence James Clapper and IC Chief Information Officer
Al Tarasiuk almost three years ago. Cloud computing is one of the core components of the strategy
to help the IC discover, access and share critical information in an era of seemingly infinite data.
For the risk-averse intelligence community, the decision to go with a commercial cloud vendor
is a radical departure from business as usual.
In 2011, while private companies were consolidating data centers in favor of the cloud and some
civilian agencies began flirting with cloud variants like email as a service, a sometimes contentious
debate among the intelligence community's leadership took place.
... ... ...
The government was spending more money on information technology within the IC than ever before.
IT spending reached $8 billion in 2013, according to budget documents leaked by former NSA contractor
Edward Snowden. The CIA and other agencies feasibly could have spent billions of dollars standing
up their own cloud infrastructure without raising many eyebrows in Congress, but the decision to
purchase a single commercial solution came down primarily to two factors.
"What we were really looking at was time to mission and innovation," the former intelligence official
said. "The goal was, 'Can we act like a large enterprise in the corporate world and buy the thing
that we don't have, can we catch up to the commercial cycle? Anybody can build a data center, but
could we purchase something more?'"
"We decided we needed to buy innovation," the former intelligence official said.
A Groundbreaking Deal
... ... ...
The Amazon-built cloud will operate behind the IC's firewall, or more simply: It's a public cloud
built on private premises.
Intelligence agencies will be able to host applications or order a variety of on-demand services
like storage, computing and analytics. True to the National Institute of Standards and Technology
definition of cloud computing, the IC cloud scales up or down to meet the need.
In that regard, customers will pay only for services they actually use, which is expected to generate
massive savings for the IC.
"We see this as a tremendous opportunity to sharpen our focus and to be very efficient," Wolfe
told an audience at AWS' annual nonprofit and government symposium in Washington. "We hope to get
speed and scale out of the cloud, and a tremendous amount of efficiency in terms of folks traditionally
using IT now using it in a cost-recovery way."
... ... ...
For several years there hasn't been even a close challenger to AWS. Gartner's 2014 quadrant shows
that AWS captures 83 percent of the cloud computing infrastructure market.
In the combined cloud markets for infrastructure and platform services, hybrid and private clouds (worth a collective $131 billion at the end of 2013), Amazon's revenue grew 67 percent in the first quarter
of 2014, according to Gartner.
While the public sector hasn't been as quick to capitalize on cloud computing as the private sector,
government spending on cloud technologies is beginning to jump.
Researchers at IDC estimate federal private cloud spending will reach $1.7 billion in 2014, and
$7.7 billion by 2017. In other industries, software services are considered the leading cloud technology,
but in the government that honor goes to infrastructure services, which IDC expects to reach $5.4
billion in 2017.
In addition to its $600 million deal with the CIA, Amazon Web Services also does business with
NASA, the Food and Drug Administration and the Centers for Disease Control and Prevention. Most recently,
the Obama Administration tapped AWS to host portions of HealthCare.gov.
Portable Batch System (PBS) is software used in cluster computing to schedule jobs on multiple nodes. PBS started as a contract project by NASA. PBS is available in three different versions, as below:
1) Torque: Terascale Open-source Resource and QUEue Manager (Torque) is derived from OpenPBS. It is developed and maintained by Adaptive Computing Enterprises. It is used as a distributed resource manager and performs well when integrated with the Maui cluster scheduler to improve performance.
2) PBS Professional (PBS Pro): the commercial version of PBS, offered by Altair Engineering.
3) OpenPBS: the open-source version released in 1998 and developed by NASA. It is not actively developed.
This article concentrates on a tutorial of PBS Pro, which is similar to some extent to Torque.
PBS contains three basic units: the server, MoM (the execution host), and the scheduler.
Server: The heart of PBS, with an executable named "pbs_server". It uses the IP network to communicate with the MoMs. The PBS server creates batch jobs and modifies jobs requested from the different MoMs. It keeps track of all resources available and assigned in the PBS complex across the different MoMs. It also monitors the PBS license for jobs; if your license expires, it will throw an error.
Scheduler: The PBS scheduler uses various algorithms to decide when a job should be executed, and on which node or vnode, using the details of available resources reported by the server. Its executable is "pbs_sched".
MoM: MoM is the mother of all executing jobs, with the executable "pbs_mom". When MoM gets a job from the server, it actually executes that job on the host. Each node must have MoM running to participate in execution.
Installation and setting up of the environment (cluster with multiple nodes)
Extract the compressed PBS Pro software and go to the path of the extracted folder; it contains an "INSTALL" file. Make that file executable, for example with "chmod +x ./INSTALL", and run it. It will ask for the "execution directory" where you want to store the executables (such as qsub, pbsnodes, qdel, etc.) used for the different PBS operations, and for the "home directory", which contains the various configuration files. Keep both at their defaults for simplicity. There are three kinds of installation available:
1) Server node: the PBS server, scheduler, MoM and commands are installed on this node. The PBS server keeps track of all execution MoMs present in the cluster and schedules jobs on those execution nodes. As MoM and the commands are also installed on the server node, it can be used to submit and execute jobs.
2) Execution node: this type installs MoM and the commands. These nodes are added as nodes available for execution in the cluster. They are also allowed to submit jobs to the server, given specific permission by the server, as we will see below. They are not involved in scheduling. This kind of installation asks for the PBS server that will be used to submit jobs, get the status of jobs, and so on.
3) Client node: these are nodes that are only allowed to submit a PBS job to the server, with specific permission from the server, and to see the status of jobs. They are not involved in execution or scheduling.
Creating vnodes in PBS Pro:
We can create multiple vnodes in a single node, each containing some part of the node's resources. We can execute jobs on these vnodes with the specified allocated resources. We can create a vnode using the qmgr command, which is the command-line interface to the PBS server. A command like the one given below creates vnodes using qmgr.
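The original screenshot with the exact command is not reproduced here. A rough sketch of what it may have looked like, assuming the qmgr "create node" syntax and the standard resource names ncpus, mem and ngpus, is:
Qmgr:
create node Vnode1 resources_available.ncpus=8,resources_available.mem=10gb,resources_available.ngpus=1,sharing=default_excl
Qmgr:
create node Vnode2 resources_available.ncpus=8,resources_available.mem=10gb,resources_available.ngpus=1,sharing=default_excl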
The command above creates two vnodes named Vnode1 and Vnode2, each with 8 CPU cores, 10 GB of memory and 1 GPU, with the sharing mode default_excl, which means the vnode can execute only one job at a time, exclusively, regardless of how many resources are free. The sharing mode can instead be default_shared, which means any number of jobs can run on that vnode until all its resources are busy. All the attributes that can be used with vnode creation are documented in the PBS Pro reference guide.
You can also create a file in the "/var/spool/PBS/mom_priv/config.d/" folder with any name you want (I prefer hostname-vnode); a sample is given below. PBS will read all files in this folder, even temporary files ending with (~), and replace the configuration for the same vnode, so delete unnecessary files to get a proper configuration of vnodes.
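The sample file did not survive in this copy. A rough sketch, assuming the Version 2 vnode configuration format from the PBS Pro guides and reusing the wolverine hostname that appears later in this tutorial, might look like this:
$configversion 2
wolverine: resources_available.ncpus = 0
wolverine: resources_available.mem = 0
wolverine[0]: resources_available.ncpus = 8
wolverine[0]: resources_available.mem = 10gb
wolverine[0]: sharing = default_excl
wolverine[1]: resources_available.ncpus = 8
wolverine[1]: resources_available.mem = 10gb
wolverine[1]: sharing = default_excl
Here the natural vnode (wolverine) is given 0 resources, which is exactly the point discussed next.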
In this example we set the default (natural) vnode's available resources to 0, because by default PBS detects and allocates all available resources to the default vnode with the sharing attribute default_shared. That causes a problem: all jobs will by default get scheduled on that default vnode, because its sharing type is default_shared. If you want jobs scheduled on your customized vnodes, you should set the available resources to 0 on the default vnode every time you restart the PBS server.
PBS status commands:
Getting the status of jobs:
qstat gives details about jobs, their states, etc.
Useful options:
To print details about all jobs which are running or in the hold state: qstat -a
To print details about subjobs in a job array which are running or in the hold state: qstat -ta
Getting the status of PBS nodes and vnodes:
The "pbsnodes -a" command provides a list of all nodes present in the PBS complex with their available and assigned resources, status, etc.
To get details of all nodes and the vnodes you created, use the "pbsnodes -av" command.
You can also specify a node or vnode name to get detailed information about that specific node or vnode, e.g.
pbsnodes wolverine (here wolverine is the hostname of a node in the PBS complex which is mapped to an IP address in the /etc/hosts file)
Job submission (qsub):
Jobs are submitted from a node (typically one running MoM and the PBS commands) to the PBS server. The server maintains queues of jobs; by default all jobs are submitted to the default queue named "workq". You may create multiple queues using the "qmgr" command, which is the administrator interface mainly used to create, delete and modify queues and vnodes. The PBS server decides which job is scheduled on which node or vnode based on the scheduling policy and the privileges set by the user. To schedule jobs, the server continuously pings all MoMs in the PBS complex to get details of available and assigned resources. PBS assigns a unique job identifier, called the JobID, to each and every job. For job submission, PBS uses the "qsub" command. Its syntax is shown below:
qsub script
Here the script may be a shell (sh, csh, tcsh, ksh, bash) script. PBS by default uses /bin/sh. You may refer to the simple script given below:
#!/bin/sh
echo "This is PBS job"
When PBS completes the execution of a job, it stores any errors in a file named JobName.e{JobID}, e.g. Job1.e1492, and the output in a file named JobName.o{JobID}, e.g. Job1.o1492.
By default it stores these files in the current working directory (as shown by the pwd command). You can change the output location by giving a path with the -o option (and the error file location with -e).
You may specify the job name with the -N option while submitting the job:
qsub -N firstJob ./test.sh
If you don't specify a job name, the files are named after the script instead of the JobName. E.g.
qsub ./test.sh will store the results in the files test.sh.e1493 and test.sh.o1493 in the current working directory.
OR
qsub -N firstJob -o /home/user1/ ./test.sh will store the error file as firstJob.e1493 and the output file as /home/user1/firstJob.o1493.
If a submitted job terminates abnormally (errors in the job itself are not abnormal; those errors get stored in the JobName.e{JobID} file), its error and output files are stored in the "/var/spool/PBS/undelivered/" folder.
In some cases you may need a job to run only after the successful or unsuccessful completion of some specified jobs; for that, PBS provides dependency options such as the one shown below.
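The exact example is missing from this copy; assuming the standard qsub dependency syntax and the job ID 316.megamind mentioned in the next sentence, it would look roughly like this:
$ qsub -W depend=afterok:316.megamind ./test.sh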
This job will start only after the successful completion of the job with job ID "316.megamind". Like afterok, PBS has other options such as beforeok, beforenotok and afternotok. You can find all of these details in the man page of qsub.
Submitting a job with priority:
There are two ways in which we can set the priority of jobs to be executed.
1) Using a single queue with different priorities for different jobs:
To change the ordering of jobs queued in an execution queue, open the "$PBS_HOME/sched_priv/sched_config" file; normally $PBS_HOME is "/var/spool/PBS/". Open this file and uncomment the line below if present, otherwise add it:
job_sort_key : "job_priority HIGH"
After saving this file you will need to restart the pbs_sched daemon on the head node; you may use the command below:
service pbs restart
After completing this, submit the job with the -p option to specify the priority of the job within the queue, as in the example below. This value may range from -1024 to 1023, where -1024 is the lowest priority and 1023 is the highest priority in the queue.
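For example, reusing the test.sh script from the job-submission section above:
$ qsub -p 100 ./test.sh
This submits test.sh with priority 100, so it will be sorted ahead of jobs with lower priority values in the same queue.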
In this case PBS will execute jobs as explained in the diagram given below.
2) Using different queues with specified priorities: we discuss this in the PBS queues section.
In that example, all jobs in queue 2 complete first, then queue 3, then queue 1, since the priority of queue 2 > queue 3 > queue 1. The resulting job execution flow is as shown below.
PBS Pro can manage multiple queues as per the user's requirements. By default every job is queued in "workq" for execution. Two types of queue are available: execution and routing queues. Jobs in an execution queue are used by the PBS server for execution. Jobs in a routing queue cannot be executed; they can be redirected to an execution queue or another routing queue using the qmove command. By default the "workq" queue is an execution queue. The order of jobs in a queue may be changed using the priority defined at job submission, as described above in the job submission section.
Useful qmgr commands:
First type qmgr which is Manager interface of PBS Pro.
To create queue:
Qmgr:
create queue test2
To set type of queue you created:
Qmgr:
set queue test2 queue_type=execution
OR
Qmgr:
set queue test2 queue_type=route
To enable queue:
Qmgr:
set queue test2 enabled=True
To set priority of queue:
Qmgr:
set queue test2 priority=50
Jobs in the queue with higher priority get preference. Only after all jobs in the higher-priority queue complete are jobs in the lower-priority queue scheduled, so there is a high probability of job starvation in the lower-priority queue.
To start queue:
Qmgr:
set queue test2 started = True
To activate all queue (present at particular node):
Qmgr:
active queue @default
To restrict a queue to specified users: you need to set the acl_user_enable attribute to true, which tells PBS to allow only the users present in the acl_users list to submit jobs.
Qmgr:
set queue test2 acl_user_enable=True
To set the users permitted to submit jobs to the queue:
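The command itself did not survive in this copy; judging from the parenthetical note that follows, it presumably looked something like this (user1 and user2 are placeholder user names, and .. stands for a hostname as explained below):
Qmgr:
set queue test2 acl_users="user1@..,user2@.."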
(In place of .. you have to specify the hostname of a compute node in the PBS complex. A user name without a hostname will allow users with that name to submit jobs from all nodes permitted to submit jobs in the PBS complex.)
To delete queues we created:
Qmgr:
delete queue test2
To see the status of all queues:
qstat -Q
You may specify a specific queue name: qstat -Q test2
To see full details of all queues: qstat -Q -f
You may specify a specific queue name: qstat -Q -f test2
About conditional, substring, and substitution parameter expansion operators
Conditional parameter expansion
Conditional parameter expansion allows branching on whether the parameter is unset, empty, or has content. Based on these conditions, the parameter can be expanded to its value, a default value, or an alternate value; throw a customizable error; or reassign the parameter to a default value. The following table shows the conditional parameter expansions: each row shows a parameter expansion using an operator to potentially modify the expansion, with the columns showing the result of that expansion given the parameter's status as indicated in the column headers. Operators with the ':' prefix treat parameters with empty values as if they were unset.
parameter expansion    unset var    var=""       var="gnu"
${var-default}         default      -            gnu
${var:-default}        default      default      gnu
${var+alternate}       -            alternate    alternate
${var:+alternate}      -            -            alternate
${var?error}           error        -            gnu
${var:?error}          error        error        gnu
The = and := operators function identically to the - and :- operators in the table, respectively, except that the = variants rebind the variable to the result of the expansion.
As an example, let's try opening a user's editor on a file specified by the OUT_FILE variable. If either the EDITOR environment variable or our OUT_FILE variable is not specified, we will have a problem. Using a conditional expansion, we can ensure that when the EDITOR variable is expanded, we get the specified value or at least a sane default:
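The code sample from the original article is missing here; a minimal sketch using the operators from the table above, with vi as an assumed fallback editor, could be:
$ ${EDITOR:-vi} "${OUT_FILE:?OUT_FILE must be set}"
If OUT_FILE is unset or empty, the :? operator aborts the command with the given message instead of opening the editor on a blank filename; if EDITOR is unset or empty, vi is used.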
Parameters can be expanded to just part of their contents, either
by offset or by removing content matching a pattern. When specifying a
substring offset, a length may optionally be specified. If running
Bash version 4.2 or greater, negative numbers may be used as offsets
from the end of the string. Note the parentheses used around the
negative offset, which ensure that Bash does not parse the expansion
as having the conditional default expansion operator from above:
$ location="CA 90095"
$ echo "Zip Code: ${location:3}"
Zip Code: 90095
$ echo "Zip Code: ${location:(-5)}"
Zip Code: 90095
$ echo "State: ${location:0:2}"
State: CA
Another way to take a substring is to remove characters from the string matching a pattern, either from the left edge with the # and ## operators or from the right edge with the % and %% operators. A useful mnemonic is that # appears left of a comment and % appears right of a number. When the operator is doubled, it matches greedily, as opposed to the single version, which removes the most minimal set of characters matching the pattern.
var="open source"
parameter expansion
offset of 5
length of 4
${var:offset}
source
${var:offset:length}
sour
pattern of *o?
${var#pattern}
en source
${var##pattern}
rce
pattern of ?e*
${var%pattern}
open sour
${var%%pattern}
o
The pattern-matching used is the same as with filename globbing: * matches zero or more of any character, ? matches exactly one of any character, and [...] brackets introduce a character class match against a single character, supporting negation (^) as well as the POSIX character classes, e.g. [[:alnum:]]. By excising characters from our string in this manner, we can take a substring without first knowing the offset of the data we need:
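The interactive session that followed is not reproduced here; a small sketch of the same idea, using a made-up filename to pull out pieces without computing offsets:
$ file="archive.tar.gz"
$ echo "Extension: ${file##*.}"
Extension: gz
$ echo "Without the extension: ${file%.*}"
Without the extension: archive.tar
$ echo "Base name before any dot: ${file%%.*}"
Base name before any dot: archive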
The same types of patterns are used for substitution in parameter expansion. Substitution is introduced with the / or // operators, followed by two arguments separated by another / representing the pattern and the string to substitute. The pattern matching is always greedy, so the doubled version of the operator, in this case, causes all matches of the pattern to be replaced in the variable's expansion, while the singleton version replaces only the leftmost.
var="free and open"
parameter expansion
pattern of
string of _
${var/pattern/string}
free_and open
${var//pattern/string}
free_and_open
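As a quick interactive check of the same substitutions shown in the table (the pattern is a single space):
$ var="free and open"
$ echo "${var/ /_}"
free_and open
$ echo "${var// /_}"
free_and_open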
The wealth of parameter expansion modifiers transforms Bash
variables and other parameters into powerful tools beyond simple value
stores. At the very least, it is important to understand how parameter
expansion works when reading Bash scripts, but I suspect that not
unlike myself, many of you will enjoy the conciseness and
expressiveness that these expansion modifiers bring to your scripts as
well as your interactive sessions.
...At Duke University's Office of Information Technology (OIT), we began looking at containers as
a way to achieve higher density from the virtualized infrastructure used to host websites. Virtual
machine (VM) sprawl had started to become a problem. We favored separating each client's website
onto its own VM for both segregation and organization, but steady growth meant we were managing more
servers than we could handle. As we looked for ways to lower management overhead and make better
use of resources, Docker hit the news, and we began to experiment with
containerization
for our web applications.
For us, the initial investigation of containers mirrors a shift toward a DevOps culture.
Where we started
When we first looked into container technology, OIT was highly process driven and composed of
monolithic applications and a monolithic organizational structure. Some early forays into automation
were beginning to lead the shift toward a new cultural organization inside the department, but even
so, the vast majority of our infrastructure consisted of "pet" servers (to use the
pets
vs. cattle analogy). Developers created their applications on staging servers designed to match
production hosting environments and deployed by migrating code from the former to the latter. Operations
still approached hosting as it always had: creating dedicated VMs for individual services and filing
manual tickets for monitoring and backups. A service's lifecycle was marked by change requests, review
boards, standard maintenance windows, and lots of personal attention.
A shift in culture
As we began to embrace containers, some of these longstanding attitudes toward development and
hosting began to shift a bit. Two of the larger container success stories came from our investigation
into cloud infrastructure. The first project was created to host hundreds of R-Studio containers
for student classes on Microsoft Azure hosts, breaking from our existing model of individually managed
servers and moving toward "cattle"-style infrastructure designed for hosting containerized applications.
The other was a rapid containerization and deployment of the Duke website to Amazon Web Services
while in the midst of a denial-of-service attack, dynamically creating infrastructure and rapidly
deploying services.
The success of these two wildly nonstandard projects helped to legitimize containers within the
department, and more time and effort was put into looking further into their benefits and those of
on-demand and disposable cloud infrastructure, both on-premises and through public cloud providers.
It became apparent early on that containers lived within a different timescale from traditional
infrastructure. We started to notice cases where short-lived, single-purpose services were created,
deployed, lived their entire lifecycle, and were decommissioned before we completed the tickets created
to enter them into inventory, monitoring, or backups. Our policies and procedures were not able to
keep up with the timescales that accompanied container development and deployment.
In addition, humans couldn't keep up with the automation that went into creating and managing
the containers on our hosts. In response, we began to develop more automation to accomplish usually
human-gated processes. For example, the dynamic migration of containers from one host to another
required a change in our approach to monitoring. It is no longer enough to tie host and service monitoring
together or to submit a ticket manually, as containers are automatically destroyed and recreated
on other hosts in response to events.
Some of this was in the works for us already: automation and container adoption seem to parallel one another. At some point, they become inextricably intertwined.
As containers continued to grow in popularity and OIT began to develop tools for container orchestration,
we tried to further reinforce the "cattle not pets" approach to infrastructure. We limited login
of the hosts to operations staff only (breaking with tradition) and gave all hosts destined for container
hosting a generic name. Similar to being coached to avoid naming a stray animal in an effort to prevent
attachment, servers with generic names became literally forgettable. Management of the infrastructure
itself became the responsibility of automation, not humans, and humans focused their efforts on the
services inside the containers.
Containers also helped to usher continuous integration into our everyday workflows. OIT's Identity
Management team members were early adopters and began to build Kerberos key distribution centers
(KDCs) inside containers using Jenkins, building regularly to incorporate patches and test the resulting
images. This allowed the team to catch breaking builds before they were pushed out onto production
servers. Prior to that, the complexity of the environment and the widespread impact of an outage
made patching the systems a difficult task.
Embracing continuous deployment
Since that initial use case, we've also embraced continuous deployment. There is a solid pattern
for every project that gets involved with our continuous integration/continuous deployment (CI/CD)
system. Many teams initially have a lot of hesitation about automatically deploying when tests pass,
and they tend to build checkpoints requiring human intervention. However, as they become more comfortable
with the system and learn how to write good tests, they almost always remove these checkpoints.
Within our container orchestration automation, we use Jenkins to patch base images on a regular
basis and rebuild all the child images when the parent changes. We made the decision early that the
images could be rebuilt and redeployed at any time by automated processes. This meant that any code
included in the branch of the git repository used in the build job would be included in the image
and potentially deployed without any humans involved. While some developers initially were uncomfortable
with this, it ultimately led to better development practices: Developers merge into the production
branch only code that is truly ready to be deployed.
This practice facilitated rebuilding container images immediately when code is merged into the
production branch and allows us to automatically deploy the new image once it's built. At this point,
almost every project using the automatic rebuild has also enabled automated deployment.
Looking ahead
Today the adoption of both containers and DevOps is still a work in progress for OIT.
Internally we still have to fight the entropy of history even as we adopt new tools and culture.
Our biggest challenge will be convincing people to break away from the repetitive break-fix
mentality that currently dominates their jobs and to focus more on automation. While time is
always short, and the first step always daunting, in the long run adopting automation for day-to-day
tasks will free them to work on more interesting and complex projects.
Thankfully, people within the organization are starting to embrace working in organized or ad
hoc groups of cross-discipline members and developing automation together. This will definitely become
necessary as we embrace automated orchestration and complex systems. A group of talented individuals
who possess complementary skills will be required to fully manage the new environments.
Amazon's S3 web-based storage service is
experiencing widespread issues, leading to service that's either
partially or fully broken on websites, apps and devices upon which it
relies. The AWS offering provides hosting for images for a lot of sites,
and also hosts entire websites, and app backends including Nest.
The S3 outage is due to "high error rates with S3 in US-EAST-1,"
according to
Amazon's AWS service health dashboard
, which is where the company
also says it's working on "remediating the issue," without initially
revealing any further details.
Affected websites and services include Quora, newsletter provider
Sailthru, Business Insider, Giphy, image hosting at a number of publisher
websites, filesharing in Slack, and many more. Connected lightbulbs,
thermostats and other IoT hardware is also being impacted, with many
unable to control these devices as a result of the outage.
Amazon S3 is used by around 148,213 websites, and 121,761 unique
domains, according to data tracked by
SimilarTech
, and its popularity as a content host concentrates
specifically in the U.S. It's used by 0.8 percent of the top 1 million
websites, which is actually quite a bit smaller than CloudFlare, which is
used by 6.2 percent of the top 1 million websites globally – and yet it's
still having this much of an effect.
Amazingly, even the status indicators on the AWS service status page
rely on S3 for storage of its health marker graphics, hence why the site
is still showing all services green despite obvious evidence to the
contrary.
Update (11:40 AM PT):
AWS has fixed the issues with its
own dashboard at least – it'll now
accurately reflect service status as it continues to attempt to fix the
problem
.
Update (11:57 AM PT):
AWS says it now believes it understands the "root cause" of the S3 issues, and is "working hard at repairing" it. It has not shared specifics of that cause.
Update (12:15 PM PT):
Network intelligence software
provider
ThousandEyes
notes that all the packet loss for the ongoing issue
appears to be happening in the Ashburn, VA area. Amazon has an AWS data
center in Ashburn, whose
exact location was revealed in a news story last year
due to a fire
during its construction.
Update (12:54 PM PT):
AWS says it's seeing "recovery
for S3 object retrievals, listing and deletions" which means you're
probably seeing avatars and other visuals assets come back in some spots.
The company also says it expects further improvements to error rates
within the next hour.
Update (1:20 PM PT):
S3 is now fully recovered in
terms of the retrieval, listing and deletion of existing objects,
according to the AWS status page, and it's now working on restoring
normal operation for the addition of new items to S3-based storage.
Update (2:10 PM PT):
AWS says that it's now fully
recovered in terms of resolving the error rates it was seeing, and S3
service is now "operating normally."
What's new in this release (see below for details):
- Direct3D command stream runs asynchronously.
- Better serial and parallel ports autodetection.
- Still more fixes for high DPI settings.
- System tray notifications on macOS.
- Various bug fixes.
... improved support for
Warhammer 40,000: Dawn of War III, which will be ported to the Linux and SteamOS platforms by Feral Interactive on June 8, Wine 2.9
is here to introduce support for tessellation shaders in Direct3D, binary mode support in WebServices, RegEdit UI improvements,
and clipboard changes detected through Xfixes.
...
The Wine 2.9 source tarball can be downloaded
right now from our website if you fancy compiling it on your favorite GNU/Linux distribution, but please try to keep in mind
that this is a pre-release version not suitable for production use. We recommend installing the stable Wine branch if you want to
have a reliable and bug-free experience.
Wine 2.9 will also be installable from the software repos of your operating system in the coming days.
"... Baker correctly diagnoses the impact of boomers aging, but there is another effect - "knowledge work" and "high skill manufacturing" is more easily outsourced/offshored than work requiring a physical presence. ..."
Baker correctly diagnoses the impact of boomers aging, but
there is another effect - "knowledge work" and "high skill
manufacturing" is more easily outsourced/offshored than
work requiring a physical presence.
Also outsourcing
"higher wage" work is more profitable than outsourcing
"lower wage" work - with lower wages also labor cost as a
proportion of total cost tends to be lower (not always).
And outsourcing and geographically relocating work
creates other overhead costs that are not much related to
the wages of the local work replaced - and those overheads
are larger in relation to lower wages than in relation to
higher wages.
libezkova -> cm... May 20, 2017 at 08:34 PM
"Also outsourcing "higher wage" work is more profitable than outsourcing "lower wage" work"
"... All of the hype around software and developers, which tends to significantly skew even the
DevOps discussion, runs the risk of creating the perception that IT ops is just a necessary evil. Indeed,
some go so far as to make the case for a 'NoOps' world in which the public cloud magically takes care
of everything downstream once developers have 'innovated' and 'created'. ..."
"... This kind of view comes about from people looking through the wrong end of the telescope. Turn
the thing around and look up close at what goes on in the world of ops, and you get a much better sense
of perspective. Teams operating in this space are not just there to deploy the next custom software
release and make sure it runs quickly and robustly - in fact that's often a relatively small part of
what they do. ..."
"... And coming back to operations, you are sadly mistaken if you think that the public cloud makes
all challenges and requirements go away. If anything, the piecemeal adoption of cloud services has made
things more complex and unpredictable from an integration and management perspective. ..."
"... There are all kinds of valid reasons to keep an application sitting on your own infrastructure
anyway - regulation, performance, proximity to dependent solutions and data, etc. Then let's not forget
the simple fact that running things in the cloud is often more expensive over the longer term. ..."
Get real – it's not all about developers and DevOps
Listen to some DevOps evangelists talk, and you would get the impression that IT operations teams
exist only to serve the needs of developers. Don't get me wrong, software development is a good competence
to have in-house if your organisation depends on custom applications and services to differentiate
its business.
As an ex-developer, I appreciate the value of being able to deliver something tailored to a specific
need, even if it does pain me to see the shortcuts too often taken nowadays due to ignorance of some
of the old disciplines, or an obsession with time-to-market above all else.
But before this degenerates into an 'old guy' rant about 'youngsters today', let's get back to
the point that I really want to make.
All of the hype around software and developers, which tends to significantly skew even the
DevOps discussion, runs the risk of creating the perception that IT ops is just a necessary evil.
Indeed, some go so far as to make the case for a 'NoOps' world in which the public cloud magically
takes care of everything downstream once developers have 'innovated' and 'created'.
This kind of view comes about from people looking through the wrong end of the telescope.
Turn the thing around and look up close at what goes on in the world of ops, and you get a much better
sense of perspective. Teams operating in this space are not just there to deploy the next custom
software release and make sure it runs quickly and robustly - in fact that's often a relatively small
part of what they do.
This becomes obvious when you recognize how much stuff runs in an Enterprise IT landscape - software
packages enabling core business processes, messaging, collaboration and workflow platforms keeping
information flowing, analytics environments generating critical business insights, and desktop and
mobile estates serving end user access needs - to name but a few.
Vital operations
There's then everything required to deal with security, data protection, compliance and other
aspects of risk. Apart from the odd bit of integration and tailoring work - the need for which is
diminishing with modern 'soft-coded', connector-driven solutions - very little of all this has anything
to do with development and developers.
A big part of the rationale for modernising your application landscape and migrating to the latest
flexible and open software packages and platforms is to eradicate the need for coding wherever you
can. Code is expensive to build and maintain, and the same can often be achieved today through software
switches, policy-driven workflow, drag-and-drop interface design, and so on. Sensible IT teams only
code when they absolutely have to.
And coming back to operations, you are sadly mistaken if you think that the public cloud makes
all challenges and requirements go away. If anything, the piecemeal adoption of cloud services has
made things more complex and unpredictable from an integration and management perspective.
There are all kinds of valid reasons to keep an application sitting on your own infrastructure
anyway - regulation, performance, proximity to dependent solutions and data, etc. Then let's not
forget the simple fact that running things in the cloud is often more expensive over the longer term.
Against this background, an 'appropriate' level of custom development and the selective use of
cloud services will be the way forward for most organisations, all underpinned by a well-run data
centre environment acting as the hub for hybrid delivery. This is the approach that tends to be taken
by the most successful enterprise IT teams, and the element that makes particularly high achievers
stand out is agile and effective IT operations.
This isn't just to support any DevOps agenda you might have; it is demonstrably a key enabler
across the board. Of course if you work in operations, you will already intuitively know all
this. But if you want some ammunition to spell it out to others who need enlightenment, take a look
at our research report entitled
IT Ops and a Digital
Business Enabler; more than just keeping the lights on . This is based on input from 400 Senior
European IT professionals. ®
I think this is one fad that has run its course. If nothing else, the one thing that cloud has
brought to the software world is the separation of software from the environment it runs in, and
since the Ops side of DevOps is all about the integration of the platform and software, what
you end up with in a cloudy world is a lot of people looking for a new job.
For decades developers have been ignored by infrastructure vendors because the decision makers
buying infrastructure sit in the infrastructure teams. Now with the cloud etc vendors realize
they will lose supporters within these teams.
So instead - infrastructure vendors target developers to become their next fanboys.
E.g. Dear developer, you won't need to speak to your infrastructure admins anymore to setup
a development environment. Now you can automate, orchestrate the provisioning of your containerized
development environment at the push of a button. Blah blah blah, but you have to buy our storage.
I remember the days when every DBA wanted RAID10 just because thats what the whitepaper recommended.
By that time storage technology had long moved on, but the DBA still talked about Full Stripe
Writes.
Now with DevOps you'll have Developers influencing infrastructure decisions, because they just
learned about snapshots. And yes - it has to be all flash - and designed from the ground up by
millenials that eat avocado.
Re: DevOps was never supposed to replace Operations
Yes, DevOps isn't about replacing Ops. But try telling that to the powers that be. It is sold
and seen as a cost cutting measure.
As for devs learning Ops and vice versa, there are very few on both sides who really understand
what it takes to do the others job. I have a very high regard for Devs, but when it comes to infra,
they are, as a whole, very incompetent. Just like I'm incompetent in Dev. can't have one without
the other. I feel that in time, the pendulum will swing away from cloud as execs and accountants
realize how it isn't really saving any money.
The real question is: Will there be any qualified operations engineers available or will they
all have retired out or have found work elsewhere. It isn't easy to be an ops engineer, takes
a lot of experience to get there, and qualified candidates are hard to come by. Let's face it,
in today's world, its a dying breed.
Nice of you to point out what us in Ops have known all along. I'm afraid it will fall on deaf
ears, though. Until the executives who constantly fall for the new shiny are made to actually
examine business needs and processes and make business decisions based on said.
Our laughable move to cloud here involved migrating off of on prem Exchange to O365. The idea
was to free up our operations team to allow us to do more in house projects. Funny thing is, it
takes more management of the service than we ever did on premises. True, we aren't maintaining
the Exchange infra, but now we have SQL servers, DCs, ADFS, etc, to maintain in the MS cloud to
allow authentication just to use the product. And because mail and messaging is business critical,
we have to have geographically disparate instances of both. And the cost isn't pretty. Yay cloud.
Linus Torvalds believes the technology industry's celebration of innovation is smug, self-congratulatory,
and self-serving. The term of art he used was more blunt: "The innovation the industry talks about
so much is bullshit," he said. "Anybody can innovate. Don't do this big 'think different'... screw
that. It's meaningless.
In a deferential interview at the
Open Source Leadership Summit in California on Wednesday, conducted by Jim Zemlin, executive
director of the Linux Foundation, Torvalds discussed how he has managed the development of the Linux
kernel and his attitude toward work.
"All that hype is not where the real work is," said Torvalds. "The real work is in the details."
Torvalds said he subscribes to the view that successful projects are 99 per cent perspiration,
and one per cent innovation.
As the creator and benevolent dictator of the
open-source Linux kernel , not to mention the
inventor of the Git distributed version control system, Torvalds has demonstrated that his approach
produces results. It's difficult to overstate the impact that Linux has had on the technology industry.
Linux is the dominant operating system for servers. Almost all high-performance computing runs on
Linux. And the majority of mobile devices and embedded devices rely on Linux under the hood.
The Linux kernel is perhaps the most successful collaborative technology project of the PC era.
Kernel contributors, totaling more than 13,500 since 2005, are adding about 10,000 lines of code,
removing 8,000, and modifying between 1,500 and 1,800 daily, according to Zemlin. And this has been
going on – though not at the current pace – for more than two and a half decades.
"We've been doing this for 25 years and one of the constant issues we've had is people stepping
on each other's toes," said Torvalds. "So for all of that history what we've done is organize the
code, organize the flow of code, [and] organize our maintainership so the pain point – which is people
disagreeing about a piece of code – basically goes away."
The project is structured so people can work independently, Torvalds explained. "We've been able
to really modularize the code and development model so we can do a lot in parallel," he said.
Technology plays an obvious role but process is at least as important, according to Torvalds.
"It's a social project," said Torvalds. "It's about technology and the technology is what makes
people able to agree on issues, because ... there's usually a fairly clear right and wrong."
But now that Torvalds isn't personally reviewing every change as he did 20 years ago, he relies
on a social network of contributors. "It's the social network and the trust," he said. "...and we
have a very strong network. That's why we can have a thousand people involved in every release."
The emphasis on trust explains the difficulty of becoming involved in kernel development, because
people can't sign on, submit code, and disappear. "You shoot off a lot of small patches until the
point where the maintainers trust you, and at that point you become more than just a guy who sends
patches, you become part of the network of trust," said Torvalds.
Ten years ago, Torvalds said he told other kernel contributors that he wanted to have an eight-week
release schedule, instead of a release cycle that could drag on for years. The kernel developers
managed to reduce their release cycle to around two and half months. And since then, development
has continued without much fuss.
"It's almost boring how well our process works," Torvalds said. "All the really stressful times
for me have been about process. They haven't been about code. When code doesn't work, that can actually
be exciting ... Process problems are a pain in the ass. You never, ever want to have process problems
... That's when people start getting really angry at each other."
"... Most of us use some form of desired state solution already. Desired state solutions basically
involve an OS agent that gets a config from a centralized location and applies the relevant configuration
to the operating system and/or applications. ..."
12 May 2017 at 14:56, Trevor Pott
Infrastructure as code is a buzzword frequently thrown out alongside DevOps and continuous integration as being the modern way of doing things. Proponents cite benefits ranging from an amorphous "agility" to reducing the time to deploy new workloads. I have an argument for infrastructure as code that boils down to "cover your ass", and I have discovered it's not quite so difficult as we might think.
... ... ...
None of this is particularly surprising. When you have an environment where each workload is a pet, change is slow, difficult, and requires a lot of testing. Reverting changes is equally tedious, and so a lot of planning goes into making sure that any given change won't cascade and cause knock-on effects elsewhere.
In the real world this is really the result of two unfortunate aspects of human nature. First:
everyone hates doing documentation, so it's highly unlikely that in an unstructured environment every
change from the last refresh was documented. The second driver of chaos and problems is that there
are few things more permanent than a temporary fix.
When you don't have the budget for the right hardware, software or services you make do. When
something doesn't work you "innovate" a solution. When that breaks something, you patch it. You move
from one problem to the next, and if you're not careful, you end up with something so fragile that
if you breathe on it, it falls over. At this point, you burn it all down and restart from scratch.
This approach to IT is fine - if you have 5, 10 or even 50 workloads. A single techie can reasonably
be expected to keep that all in their head, know their network and solve any problems they encounter.
Unfortunately, 50 workloads today describes only the smallest of shops. Everyone else is juggling too many workloads to be playing the pets game any more.
Most of us use some form of desired state solution already. Desired state solutions basically
involve an OS agent that gets a config from a centralized location and applies the relevant configuration
to the operating system and/or applications. Microsoft's Group Policy can be considered a really primitive version of this, with System Center being a more powerful but miserable-to-use example. The modern, friendlier tools are Puppet, Chef, SaltStack, Ansible and the like.
Once desired state configs are in place, we're no longer beating individual workloads into shape, or checking them manually for deviation from design. If it all does what it says on the tin, configurations are applied and errors are thrown if they can't be. Usually there is some form of analysis software to determine how many of what is out of compliance. This is a big step forward.
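For readers who have never touched one of these tools, here is a minimal sketch in Python of the desired state idea described above: an agent reads a declarative config and converges the machine toward it, reporting anything it cannot fix. It is only an illustration of the concept, not how Puppet, Chef or Ansible are actually implemented; the config path, JSON keys and service names are hypothetical.

import json
import subprocess

# Hypothetical config file, assumed to have already been fetched from a central server.
DESIRED_STATE_FILE = "/etc/desired_state.json"

def service_is_running(name):
    # systemctl returns 0 when the unit is active.
    return subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0

def converge():
    """Compare desired state with actual state and try to close the gap."""
    errors = []
    with open(DESIRED_STATE_FILE) as f:
        desired = json.load(f)                 # e.g. {"services": ["nginx", "sshd"]}
    for svc in desired.get("services", []):
        if not service_is_running(svc):
            result = subprocess.run(["systemctl", "start", svc])
            if result.returncode != 0:
                errors.append("could not start " + svc)   # the "errors thrown if they can't be" applied
    return errors

if __name__ == "__main__":
    for problem in converge():
        print("NON-COMPLIANT:", problem)

In a real tool this loop also covers packages, files, users and so on, and the compliance report feeds the analysis software mentioned above.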
The Technocult, also known as the Machine Cult, is the semi-official name given by The Church of the Crossed Heart to followers of the Mechanicum faith who supply and maintain virtually all of the church's technology, engineering and industry.
Although they serve with the Church of the Crossed Heart, they have their own version of worship that differs substantially in theology and ritualistic forms from that of The Twelve Angels. Instead the Technocult worships a deity they call the Machine God or Omnissiah. The Technocult believes that knowledge is divine and comes only from the Omnissiah, thus making any objects that demonstrate the application of knowledge (i.e. machinery), or contain it (books), holy in the eyes/optical implants of the Technocult. The Technocult regards organic flesh as weak and imperfect, with the Rot being viewed as a divine message from the Omnissiah demonstrating its weakness, thus making its removal and replacement by mechanical, bionic parts a sacred process that brings them closer to their god, with many of its older members having very little of their original bodies remaining.
The date of the cult's formation is unknown, or a closely guarded secret...
1. Saying you're doing Agile just cos you're doing daily stand-ups. You're not doing agile. There
is so much more to agile practices than this! Yet I'm surprised how often I've heard that story.
It really is remarkable.
... ... ....
3. Thinking that agile is a silver bullet and will solve all your problems. That's so naive,
of course it won't! Humans and software are a complex mix with any methodology, let alone with an
added dose of organisational complexity. Agile development will probably help with many things, but
it still requires a great deal of skill and there is no magic button.
... ... ...
8. People who use agile as an excuse for having no process or producing no documentation. If documents
are required or useful, there's no reason why an agile development team shouldn't produce them. Just
not all up-front; do it as required to support each feature or iteration. JFDI (Just F'ing Do It)
is not agile!
David, 23 February 2010 at 1:21 am
So agree on number 1. Following "Certified" Scrum Master training (prior to the exam requirement),
a manager I know now calls every regular status meeting a "scrum", regardless of project or methodology.
Somehow the team is more agile as a result.
Ironically he pulled up another staff member for "incorrectly" using the term retrospective.
Andy Till, 23 February 2010 at 9:28 am
I can think of far worse: how about pairing with the guy in the office who is incapable of compromise?
Steve Watson, 13 May 2010 at 10:06 am
Kelly
Good list!
I like number 9, as I find with testing people think that they no longer need to write proper test cases and scripts – a list of confirmations on a user story will do. Well, if it's a simple change I guess you can dispense with test scripts, but if it's something more complex then there is no reason NOT to write scripts. If you have a reasonably large team of people who could execute the tests, they can follow the test steps and validate against the expected results. It also means that you can sensibly lump together test cases and cover them with one test.
If you don't think about how you will execute them and just tackle them one by one off the confirmations list, you miss the opportunity to run one test and cover many separate cases, saving time.
I always find test scripts useful if someone different re-runs a test, as they then follow
the same process as before. This is why we automate regression so the tests are executed the same
each time.
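To make that concrete, here is a minimal sketch of an automated regression check of the kind described above, assuming pytest; the function under test, apply_discount, is a hypothetical stand-in for whatever the user story changed. One parametrised script covers many separate cases in a single run, which is exactly the time saving mentioned.

import pytest

def apply_discount(price, percent):
    """Hypothetical code under test."""
    return round(price * (1 - percent / 100), 2)

# One parametrised test executes the same steps against many cases,
# and runs identically every time it is re-run.
@pytest.mark.parametrize("price,percent,expected", [
    (100.00, 10, 90.00),
    (19.99, 0, 19.99),
    (50.00, 100, 0.00),
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected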
John Quincy, 24 October 2011 at 12:02 am
I am not a fan of agile. Unless you have a small group of developers who are in perfect sync
with each other at all times, this "one size fits all" methodology is destructive and downright
dangerous. I have personally witnessed a very good company go out of business this year because
they transformed their development shop from a home-grown iterative methodology to SCRUM. The
team was required to abide by the SCRUM rules 100%. They could not keep up with customer requirements
and produced bug filled releases that were always late. These developers went from fun, friendly,
happy people (pre-SCRUM) [who NEVER missed a date] to bitter, sarcastic, hard to be around 'employees'.
When the writing was on the wall a couple of months back, the good ones got the hell out of there,
and the company could not recover.
Some day, I'm convinced that Beck through Thomas will proclaim that the Agile Manifesto was
all a big practical joke that got out of control.
This video pretty much lays out the one and only reason why management wants to implement Agile:
It's a cycle of violence when a project claims to be Agile just because of standups and iterations and doesn't think about resolving the core challenges it had to begin with. People are left still battling said challenges and then say that Agile sucks.
"... while DevOps is appealing to startups, there are important stumbling blocks in the path of DevOps adoption within the enterprise. ..."
"... The tools needed to implement a DevOps culture are lacking. While some of the tools can be provided by vendors and others can be created within the enterprise, a process which takes a long period of time, "there is a marathon of organizational change and restructuring that must occur before such tools could ever be bought or built." ..."
Rachel Shannon-Solomon suggests that most enterprises are not ready for DevOps, while Gene Kim says
that they must make themselves ready if they want to survive.
While acknowledging that large companies such as Google and Facebook benefit from implementing
DevOps, and that "there is no lack of appetite to experiment with DevOps practices" within "Fortune
500s and specifically financial services firms", Shannon-Solomon remarks that "there are few true
change agents within enterprise IT willing to affect DevOps implementations."
She has come to this conclusion based on "conversations with startup founders, technology incumbents
offering DevOps solutions, and technologists within large enterprises."
Shannon-Solomon brings four arguments to support her position:
Siloed structures and organizational change. According to Shannon-Solomon, enterprises create siloed structures between teams because that's how large organizations maximize value, and one of DevOps' main purposes is to bring those silos down. Sometimes a large company relies on another vendor for operational support, and such vendors "tend to bring their own degree of siloing." Also, it is expensive for an enterprise to invest in "holistic solutions to facilitate DevOps" after they have "invested heavily on integration for existing solutions."
Buy vs. build. The tools needed to implement a DevOps culture are lacking. While
some of the tools can be provided by vendors and others can be created within the enterprise,
a process which takes a long period of time, "there is a marathon of organizational change and
restructuring that must occur before such tools could ever be bought or built."
Vendors of DevOps solutions acknowledge that when selling to the enterprise, they are trying
to sell a cultural revolution. According to Shannon-Solomon, it is hard to introduce DevOps
to development teams because vendors need to first win over individual developers with the efficiency
of their solution, for example, by initially eschewing procurement and offering a sandbox environment
directly to developers to test out the environment.
Selling DevOps toolkits to the enterprise means facing the well-documented challenges of navigating
procurement and a mindset currently more primed to build than buy DevOps tools.
Return on investment. Shannon-Solomon cites a senior IT professional working at an
investment bank saying that his company "has not been very successful at tracking DevOps projects
occurring within individual business units using homegrown tools, and any evaluation of a project's
success has been limited to anecdotal assessments and perceived results."
Shannon-Solomon ends her post wondering "how long will it be until enterprises are forced to accept
that they must accelerate their experiments with DevOps" and hoping that "more individual change
agents within large organizations may emerge" in the future.
No methodology can substitute for good engineers who actually talk to and work with each other. Good
engineers can benefit from a better software development methodology, but even the best software development
methodology is powerless to convert mediocre developers into stars.
Notable quotes:
"... disorganized and never-ending ..."
"... Agile is to proper software engineering what Red Bull Flugtag is to proper aeronautic engineering.... ..."
"... As TFA points out, that always works fine when your requirements are *all* known an are completely static. That rarely happens in most fields. ..."
"... The problem with Agile is that it gives too much freedom to the customer to change their mind late in the project and make the developers do it all over again. ..."
"... If you are delivering to customer requests you will always be a follower and never succeed. You need to anticipate what the customers need. ..."
"... It frequently is. It doesn't matter what methodology you use -- if you change major features/priorities at the last minute it will cost multiple times as much. Yet frequently customers expect it to be cheap because "we're agile". And by accepting that change will happen you don't push the customers to make important decisions early, ensuring that major changes will happen, instead of just being possible. ..."
"... The problem with all methodologies, or processes, or whatever today's buzzword is, is that too many people want to practice them in their purest form. Excessive zeal in using any one approach is the enemy of getting things done. ..."
"... On a sufficiently large project, some kind of upfront design is necessary. ..."
"... If you insist on spinning back every little change to a monstrously detailed Master Design Document, you'll move at a snail's pace. As much as I hate the buzzword "design patterns", some pattern is highly desirable. ..."
"... there is no substitute for good engineers who actually talk to and work with each other. ..."
"... If you don't trust those people to make intelligent decisions (including about when things do have to be passed up) then you've either got the wrong people or a micromanagement fetish ..."
"... The problem the article refers to about an upfront design being ironclad promises is tough. Some customers will work with you, and others will get their lawyers and "systems" people to waste your time complaining about every discrepancy, ..."
"... In defense everything has to meet spec, but it doesn't have to work. ..."
"... There is absolutely no willingness to make tradeoffs as the design progresses and you find out what's practical and necessary and what's not. ..."
"... I'll also admit that there is a tendency to get sloppy in software specs because it is easier to make changes. Hardware, with the need to order materials, have things fabbed, tape out a chip, whatever, imposes a certain discipline that's lacking when you know you can change the source code at anytime. Being both, I'm not saying this is because hardware engineers are virtuous and software engineers are sloppy, but because engineers are human (at least some of them). ..."
"... Impressive stuff, and not unique to the space shuttle. Fly-by-wire systems are the same way. You're talking DO-178B [wikipedia.org] Level A stuff. It works, and it's very very expensive. If it was only 10x the cost of normal software development I'd be amazed. I agree that way too much software is poorly planned and implemented crap, and part of the reason is that nobody wants realistic cost estimates or to make the difficult decisions about what it's supposed to do up-front. But what you're talking about is aerospace quality. You couldn't afford a car or even a dishwasher made to those standards. ..."
This article discusses how some experienced developers have changed that perception. '... She's
been frustrated by her Agile experiences - and so have her clients.
"There is no process. Things fly all directions, and despite SVN [version control] developers
overwrite each other and then have to have meetings to discuss why things were changed. Too many
people are involved, and, again, I repeat, there is no process.' The premise here is not that Agile
sucks - quite to the contrary - but that developers have to understand how Agile processes can make
users anxious, and learn to respond to those fears. Not all those answers are foolproof.
For example: 'Detailed designs and planning done prior to a project seems to provide a "safety
net" to business sponsors, says Semeniuk. "By providing a Big Design Up Front you are pacifying this
request by giving them a best guess based on what you know at that time - which is at best partial
or incorrect in the first place." The danger, he cautions, is when Big Design becomes Big Commitment
- as sometimes business sponsors see this plan as something that needs to be tracked against.
"The big concern with doing a Big Design up front is when it sets a rigid expectation that
must be met, regardless of the changes and knowledge discovered along the way," says Semeniuk.' How
do you respond to user anxiety from Agile processes?"
Shinobi
Agile summed up (Score:5, Funny)
Agile is to proper software engineering what Red Bull Flugtag is to proper aeronautic engineering....
Nerdfest
Re: doesn't work
As TFA points out, that always works fine when your requirements are *all* known and are
completely static. That rarely happens in most fields.
Even in the ones where it does it's usually just management having the balls to say "No, you
can give us the next bunch of additions and changes when this is delivered, we agreed on that".
It frequently ends up delivering something less than useful.
MichaelSmith
Re: doesn't work (Score:5, Insightful)
The problem with Agile is that it gives too much freedom to the customer to change their
mind late in the project and make the developers do it all over again.
ArsonSmith
Re: doesn't work (Score:4, Insightful)
...but they can be trusted to say what is most important to them at the time.
No they can't. If you are delivering to customer requests you will always be a follower
and never succeed. You need to anticipate what the customers need. As with the (I guess made-up) quote attributed to Henry Ford, "If I listened to my customers I'd have been trying to make
faster horses." Whether he said it or not, the statement is true. Customers know what they have
and just want it to be faster/better/etc you need to find out what they really need.
AuMatar
Re: doesn't work (Score:5, Insightful)
It frequently is. It doesn't matter what methodology you use -- if you change major features/priorities
at the last minute it will cost multiple times as much. Yet frequently customers expect it to
be cheap because "we're agile". And by accepting that change will happen you don't push the customers
to make important decisions early, ensuring that major changes will happen, instead of just being
possible.
ebno-10db
Re: doesn't work (Score:5, Interesting)
"Proper software engineering" doesn't work.
You're right, but you're going to the other extreme. The problem with all methodologies,
or processes, or whatever today's buzzword is, is that too many people want to practice them in
their purest form. Excessive zeal in using any one approach is the enemy of getting things done.
On a sufficiently large project, some kind of upfront design is necessary. Spending too much
time on it or going into too much detail is a waste though. Once you start to implement things,
you'll see what was overlooked or why some things won't work as planned. If you insist on
spinning back every little change to a monstrously detailed Master Design Document, you'll move
at a snail's pace. As much as I hate the buzzword "design patterns", some pattern is highly desirable.
Don't get bent out of shape though when someone has a good reason for occasionally breaking that
pattern or, as you say, you'll wind up with 500 SLOC's to add 2+2 in the approved manner.
Lastly, I agree that there is no substitute for good engineers who actually talk to and
work with each other. Also don't require that every 2 bit decision they make amongst themselves
has to be cleared, or even communicated, to the highest levels. If you don't trust those people
to make intelligent decisions (including about when things do have to be passed up) then you've
either got the wrong people or a micromanagement fetish. Without good people you'll never
get anything decent done, but with good people you still need some kind of organization.
The problem the article refers to about an upfront design being ironclad promises is tough.
Some customers will work with you, and others will get their lawyers and "systems" people to waste
your time complaining about every discrepancy, without regard to how important it is. Admittedly
bad vendors will try and screw their customers with "that doesn't matter" to excuse every screw-up
and bit of laziness. For that reason I much prefer working on in-house projects, where "sure we
could do exactly what we planned" gets balanced with the cost and other tradeoffs.
The worst example of those problems is defense projects. As someone I used to work with said:
In defense everything has to meet spec, but it doesn't have to work. In the commercial
world specs are flexible, but it has to work.
If you've ever worked in that atmosphere you'll understand why every defense project costs
a trillion dollars. There is absolutely no willingness to make tradeoffs as the design progresses
and you find out what's practical and necessary and what's not. I'm not talking about meeting
difficult requirements if they serve a purpose (that's what you're paid for) but being unwilling
to compromise on any spec that somebody at the beginning of the project pulled out of their posterior
and obviously doesn't need to be so stringent. An elephant is a mouse built to government specifications.
Ok, you can get such things changed, but it requires 10 hours from program managers for every
hour of engineering. Conversely, don't even think about offering a feature or capability that
will be useful and easy to implement, but is not in the spec. They'll just start writing additional
specs to define it and screw you by insisting you meet those.
As you might imagine, I'm very happy to be back in the commercial world.
Anonymous Coward
Re: doesn't work (Score:2, Interesting)
You've fallen into the trap of using their terminology. As soon as 'the problem' is defined
in terms of 'upfront design', you've already lost half the ideological battle.
'The problem' (with methodology) is that people want to avoid the difficult work of thinking
hard about the business/customer's problem and coming up with solutions that meet all their needs.
But there isn't a substitute for thinking hard about the problem and almost certainly never will
be.
The earlier you do that hard thinking about the customer's problems that you are trying to solve, the cheaper, faster and better quality the result will be. Cheaper? Yes, because bugfixing that is done later in the project is a lot more expensive (as numerous software engineering studies have shown). Faster? Yes, because there's less rework. (Also, since there is usually a time = money equivalency, you can't have it done cheap unless it is also done fast.) Higher quality? Yes, because
you don't just randomly stumble across quality. Good design trumps bad design every single time.
... ... ...
ebno-10db
Re: doesn't work (Score:4, Interesting)
Until the thing is built or the software is shipped there are many options and care should
be taken that artificial administrative constraints don't remove too many of them.
Exactly, and as someone who does both hardware and software I can tell you that that's better
understood by Whoever Controls The Great Spec in hardware than in software. Hardware is understood
to have physical constraints, so not every change is seen as the result of a screw-up. It's a
mentality.
I'll also admit that there is a tendency to get sloppy in software specs because it is easier
to make changes. Hardware, with the need to order materials, have things fabbed, tape out a chip,
whatever, imposes a certain discipline that's lacking when you know you can change the source
code at anytime. Being both, I'm not saying this is because hardware engineers are virtuous and
software engineers are sloppy, but because engineers are human (at least some of them).
This is my evidence that "proper software engineering" *can* work. The fact that most businesses
(and their customers) are willing to save money by accepting less from their software is not the
fault of software engineering. We could and did build buildings much faster than we do today,
if you are willing to make more mistakes and pay more in human lives. If established industries
and their customers began demanding software at that higher standard and were willing to pay for
it like it was real engineering, then maybe it would happen more often.
Impressive stuff, and not unique to the space shuttle. Fly-by-wire systems are the same way.
You're talking DO-178B [wikipedia.org] Level A stuff. It works, and it's very very expensive.
If it was only 10x the cost of normal software development I'd be amazed. I agree that way too
much software is poorly planned and implemented crap, and part of the reason is that nobody wants
realistic cost estimates or to make the difficult decisions about what it's supposed to do up-front.
But what you're talking about is aerospace quality. You couldn't afford a car or even a dishwasher
made to those standards.
donscarletti
Re: doesn't work (Score:3)
260 people maintaining 420,000 lines of code, written to precise externally provided specifications
that change once every few years.
This is fine for NASA, but if you want something that does roughly what you need before your competitors
come up with something better, you'd better find some better programmers.
In light of all the hype, we have created a DevOps parody series – DevOps: Fact or Fiction. For those of you who did not see, in October we created an entirely separate blog (inspired by this) – however we decided that it is relevant enough to transform into a series on the AppDynamics Blog. The series will point
out the good, the bad, and the funny about IT and DevOps. Don't take anything too seriously – it's
nearly 100% stereotypes : ).
Stay tuned for more DevOps: Fact or Fiction to come. Here we go
"... Start-ups taught us this. Good developers can be passable DBAs, if need be. They make decent testers, "deployment engineers", and whatever other ridiculous term you'd like to use. Their job requires them to know much of the domain of "lower" roles. There's one big problem with this, and hopefully by now you see it: It doesn't work in the opposite direction. ..."
"... An example will make this more clear. My dad is a dentist running his own practice. He employs a secretary, hygienist, and dental assistant. Under some sort of "DentOps" movement, my dad would be making appointments and cleaning people's teeth while trying to find time to drill cavities, perform root canals, etc. My dad can do all of the other jobs in his office, because he has all the specialized knowledge required to do so. But no one, not even all of his employees combined, can do his job. ..."
"... Such a movement does a disservice to everyone involved, except (of course) employers. What began as an experiment aimed at increasing software quality has become a farce, where the most talented employees are overworked (while doing less, less useful work) and lower-level positions simply don't exist. ..."
"... you're right. Pure DevOps is no more efficient or sensible than pure Agile (or the pure Extreme Programming or the pure Structured Programming that preceeded it). The problem is purists and ideological zealotry not the particular brand of religion in question. Insistence on adherence to dogma is the problem as it prevents adoption of flexible, 'fit for purpose' solutions. Exposure to all the alternatives is good. Insisting that one hammer is ideal for every sort of nail we have and ever will have is not. ..."
"... There are developers who have a decent set of skills outside of development in QA, Operations, DB Admin, Networking, etc. Equally so, there are operations engineers who have a decent set of skills outside of operations in QA, Development, DB Admin, networking, etc. Extend this to QA and other disciplines. What I have never seen is one person who can perform all those jobs outside of their main discipline with the same level of professionalism, experience and acumen that each of those roles require to do it well at an Enterprise/World Class level. ..."
"... I prefer to think of DevOps as more of a full-stack team concept. Applying the full-stack principle at the individual levels is not sustainable, as you point out. ..."
"... DevOps roles are strictly automation focused, at least according to all job specifications I see on the internet. They don't need any development skills at all. To me it looks like a new term for what we used to call IT Operations, but more scripting/automation focused. DevOps engineer will need to know Puppet, Chef, Ansible, OS management, public cloud management, know how to set up monitoring, logging and all that stuff usual sysadmin used to do but in the modern world. In fact I used to apply for DevOps roles but quickly changed my mind as it turned out no companies need a person wearing many hats, it has absolutely nothing to do with creating software. Am I wrong? ..."
There are two recent trends I really hate: DevOps and the notion of the "full-stack" developer.
The DevOps movement is so popular that I may as well say I hate the x86 architecture or monolithic
kernels. But it's true: I can't stand it. The underlying cause of my pain? This fact: not every
company is a start-up, though it appears that every company must act as though they were.
DevOps
"DevOps" is meant to denote a close collaboration and cross-pollination between what were previously
purely development roles, purely operations roles, and purely QA roles. Because software needs to
be released at an ever-increasing rate, the old "waterfall" develop-test-release cycle is seen as
broken. Developers must also take responsibility for the quality of the testing and release environments.
The increasing scope of responsibility of the "developer" (whether or not that term is even appropriate
anymore is debatable) has given rise to a chimera-like job candidate: the "full-stack" developer.
Such a developer is capable of doing the job of developer, QA team member, operations analyst, sysadmin,
and DBA. Before you accuse me of hyperbole, go back and read that list again. Is there any role in
the list whose duties you wouldn't expect a "full-stack" developer to be well versed in?
Where did these concepts come from? Start-ups, of course (and the Agile methodology). Start-ups
are a peculiar beast and need to function in a very lean way to survive their first few years. I
don't deny this . Unfortunately, we've taken the multiple technical roles that engineers at start-ups
were forced to play due to lack of resources into a set of minimum qualifications for the
role of "developer".
Many Hats
Imagine you're at a start-up with a development team of seven. You're one year into development
of a web application that X's all the Y's and things are going well, though it's always a frantic
scramble to keep everything going. If there's a particularly nasty issue that seems to require deep
database knowledge, you don't have the liberty of saying "that's not my specialty," and handing it
off to a DBA team to investigate. Due to constrained resources, you're forced to take on the role
of DBA and fix the issue yourself.
Now expand that scenario across all the roles listed earlier. At any one time, a developer at
a start-up may be acting as a developer, QA tester, deployment/operations analyst, sysadmin, or DBA.
That's just the nature of the business, and some people thrive in that type of environment. Somewhere
along the way, however, we tricked ourselves into thinking that because, at any one time, a start-up
developer had to take on different roles he or she should actually be all those things
at once.
If such people even existed, "full-stack" developers still wouldn't be used as they should.
Rather than temporarily taking on a single role for a short period of time, then transitioning
into the next role, they are meant to be performing all the roles, all the time . And here's what
really sucks: most good developers can almost pull this off.
The Totem Pole
Good developers are smart people. I know I'm going to get a ton of hate mail, but there is
a hierarchy of usefulness of technology roles in an organization. Developer is at the top, followed
by sysadmin and DBA. QA teams, "operations" people, release coordinators and the like are at the
bottom of the totem pole. Why is it arranged like this?
Because each role can do the job of all roles below it if necessary.
Start-ups taught us this. Good developers can be passable DBAs, if need be. They make decent
testers, "deployment engineers", and whatever other ridiculous term you'd like to use. Their job
requires them to know much of the domain of "lower" roles. There's one big problem with this, and
hopefully by now you see it: It doesn't work in the opposite direction.
A QA person can't just do the job of a developer in a pinch, nor can a build-engineer do the job
of a DBA. They never acquired the specialized knowledge required to perform the role. And
that's fine. Like it or not, there are hierarchies in every organization, and people have different
skill sets and levels of ability. However, when you make developers take on other roles, you don't
have anyone to take on the role of development!
An example will make this more clear. My dad is a dentist running his own practice. He employs
a secretary, hygienist, and dental assistant. Under some sort of "DentOps" movement, my dad would
be making appointments and cleaning people's teeth while trying to find time to drill cavities, perform
root canals, etc. My dad can do all of the other jobs in his office, because he has all the specialized
knowledge required to do so. But no one, not even all of his employees combined, can do his job.
Such a movement does a disservice to everyone involved, except (of course) employers. What
began as an experiment aimed at increasing software quality has become a farce, where the most talented
employees are overworked (while doing less, less useful work) and lower-level positions simply don't
exist.
And this is the crux of the issue. All of the positions previously held by people of various levels
of ability are made redundant by the "full-stack" engineer. Large companies love this, as it means
they can hire far fewer people to do the same amount of work. In the process, though, actual development
becomes a vanishingly small part of a developer's job. This is why we see so many developers
that can't pass FizzBuzz: they never really had to write any code. All too common a question now,
can you imagine interviewing a chef and asking him what portion of the day he actually devotes to
cooking?
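For reference, FizzBuzz is the screening exercise mentioned above, shown here in full in Python; the exercise itself is trivial, which is what makes failing it so telling.

# Print 1..100, replacing multiples of 3 with "Fizz", of 5 with "Buzz",
# and of both with "FizzBuzz".
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)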
Jack of All Trades, Master of None
If you are a developer of moderately sized software, you need a deployment system in place. Quick,
what are the benefits and drawbacks of the following such systems: Puppet, Chef, Salt, Ansible, Vagrant,
Docker. Now implement your deployment solution! Did you even realize which systems had no business
being in that list?
We specialize for a reason: human beings are only capable of retaining so much knowledge. Task-switching
is cognitively expensive. Forcing developers to take on additional roles traditionally performed
by specialists means that they:
aren't spending their time developing
need to keep up with an enormous domain of knowledge
are going to burn out
What's more, companies that force developers to take on "full-stack" responsibilities are paying far more than the market average for most of those tasks. If a developer makes 100K a year, you could pay four such developers 100K per year each to do 50% development and 50% release management on a single, two-person development task. Or you could simply hire a release manager at, say, 75K and two developers who develop full-time. And notice the time wasted by developers who are part-time release managers but don't always have releases to manage.
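A back-of-the-envelope version of that salary comparison, using only the figures from the paragraph above (100K developers, a 75K release manager); it is a sketch of the author's argument, not market data.

# Option A: four "full-stack" developers, each spending half their time on
# release management -> 2.0 FTE of development plus 2.0 FTE of release work.
DEV_SALARY = 100_000
RELEASE_MANAGER_SALARY = 75_000

option_a_cost = 4 * DEV_SALARY                            # 400,000
option_a_dev_fte = 4 * 0.5                                # 2.0 FTE of actual development

# Option B: two full-time developers plus one dedicated release manager
# -> the same 2.0 FTE of development, with release work handled by a specialist.
option_b_cost = 2 * DEV_SALARY + RELEASE_MANAGER_SALARY   # 275,000

print(f"Option A: ${option_a_cost:,} for {option_a_dev_fte} dev-FTE")
print(f"Option B: ${option_b_cost:,} for 2.0 dev-FTE")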
Don't Kill the Developer
The effect of all of this is to destroy the role of "developer" and replace it with a sort of
"technology utility-player". Every developer I know got into programming because they actually enjoyed
doing it (at one point). You do a disservice to everyone involved when you force your brightest people
to take on additional roles.
Not every company is a start-up. Start-ups don't make developers wear multiple hats by choice,
they do so out of necessity. Your company likely has enough resource constraints without you inventing
some. Please, don't confuse "being lean" with "running with the fewest possible employees". And for
God's sake, let developers write code!
Some background... I started life as a dev (30years ago), have mostly been doing sysadmin and
project tech lead sorts of work for the last 15. I've always assumed the DevOps movement was resulting
in sub-par development and sub-par sysadmin/ops precisely because people were timesharing their
concerns.
But what it does bring to the party is a greater level of awareness of the other guys problems.
There's nothing quite like being rung out of bed at 3am to motivate a developer to improve his
product's logging to make supporting it easier. Similarly, the admin exposed to the vagaries of
promoting things into production in a supportable, repeatable, deterministic manner quickly learns
to appreciate the issues there. So DevOps has served a purpose and has offered benefits to the
organisations that signed on for it.
But, you're right. Pure DevOps is no more efficient or sensible than pure Agile (or the
pure Extreme Programming or the pure Structured Programming that preceeded it). The problem is
purists and ideological zealotry not the particular brand of religion in question. Insistence
on adherence to dogma is the problem as it prevents adoption of flexible, 'fit for purpose' solutions.
Exposure to all the alternatives is good. Insisting that one hammer is ideal for every sort of
nail we have and ever will have is not.
I'm very disappointed to see this kind of rubbish. It's this type of egocentric thinking and generalization
that the developer is an omniscient deity requiring worshiping and pampering that prevents DevOps
from being successful. Based on the tone and your perspective it sounds like you've been doing
DevOps wrong.
A developer role alone is not the linchpin that keeps DevOps humming - instead it's the respect
that each team member holds for each discipline and each team member's area of expertise, the
willingness of the entire team to own the product, feature delivery and operational stability
end to end, to leverage each others skills and abilities, to not blame Dev or Ops or QA for failure,
and to share knowledge.
There are developers who have a decent set of skills outside of development in QA, Operations,
DB Admin, Networking, etc. Equally so, there are operations engineers who have a decent set of
skills outside of operations in QA, Development, DB Admin, networking, etc. Extend this to QA
and other disciplines. What I have never seen is one person who can perform all those jobs outside
of their main discipline with the same level of professionalism, experience and acumen that each
of those roles require to do it well at an Enterprise/World Class level.
If you're a developer doing QA and operations, you're doing it because you have to, but there
should be no illusion that you're as good in alternate roles as someone trained and experienced
in those disciplines. To do so is a disservice to yourself and your organization that signs your
paycheck. If you're in this situation and you'd prefer making a difference rather than spewing
complaints, I would recommend talking to your manager and above about changing their skewed vision
of DevOps. If they aren't open to communication, collaboration, experimentation and continual
improvement, then their DevOps vision is dysfunctional and they're not supporting DevOps from
the top down. Saying you're DevOps and not doing it is *almost* more egregious than saying the developer
is the top of a Totem Pole of existence.
He prefaced it with 'crybabies please ignore'. It's his opinion, one that everyone but the lower-totem-pole people agree with, so... agree to disagree. I also don't think being at the bottom of the totem pole is a big f'in deal. If you're getting paid, embrace it! So many other ways to enjoy life! The top-dog people have all the pressure and die young! 99% of the people on earth don't know the difference between one nerd and another. And other nerds are always going to be egomaniacs who will find some way to justify their own superiority no matter what your achievements. So this kind of posturing is a waste of time.
I think there's a problem with your definition of DevOps. It doesn't mean developers have to be
"full-stack" or do ops stuff. And it doesn't mean "act like a startup." It simply means, at its
basis, that Developers and Operations work well together and do not have any communication barriers.
This is why I hate DevOps as a title or department, because DevOps is a culture.
Let's take your DentOps example. The dentist has 3 support staff. What if they rarely spoke
to the dentist? What if they were on different floors of the building? What if the dentist wrote
an email about how teeth should be cleaned and wasn't available to answer questions or willing
to consider feedback? What if once in a while the dentist needed to understand enough about the
basics of appointment scheduling to point out problems with the system? Maybe appointments are
being scheduled too close together. Would the patients get backed up throughout the day because
that's the secretary's problem? Of course not. Now we'd be getting into a more accurate analogy
to DevOps. If anything a dentist's office is ALREADY "DentOps" and the whole point of DevOps is
to make the dev/ops interaction work in a logical culture that other industries (like dentists)
already use!
I would tend to agree with some of that. Being able to troubleshoot network issues using monitoring
tools like Fiddler is a good thing to be aware of. I can also see a lot of companies using it
as a way to make one person do everything. Moreover, there are probably folks out there that perpetuate that behavior by taking on the machismo argument: saying that if I can do it, you should be able to do it too, or else you're not as good a developer as I am. I have never heard anyone outright claim this, but I've seen this attitude
time and time again from ambitious analysts looking to get a leg up, a pay raise, and a way to
template their values on the rest of the team. One of the first things that you're taught as a
dev is that you can't hope to know it all.
Your responsibility first and foremost as a developer is the stability and reliability of your
code and the services that you provide. In some industries this is literally a matter of life
and death (computers in your car, mission-critical medical systems). It doesn't work everywhere.
I wouldn't want to pay a receptionist 200k a year like a dentist though. Learn to hire better
receptionists. Even a moderately charming woman can create more customer loyalty, and cheaper,
than the best dentist in the world. I want my dentist to keep quiet and have a steady hand. I
want my receptionist to engage me and acknowledge my existence.
I want my secretary to be a multitasking master. I want my dentist not to multitask at all
- OUCH!
Good points, I tend to agree. I prefer to think of DevOps as more of a full-stack team concept.
Applying the full-stack principle at the individual levels is not sustainable, as you point out.
The full-stack DevOps team will have team members with primary skills in either of the traditional
specialties, and will, over time, develop decent secondary skills. But the value is not in people
constantly context switching - that actually kills efficiency. The value is in developers understanding
and developing an open relationship with testing and operations - and vice versa. And this cooperation
is inhibited by putting people in separate teams with conflicting goals. DevOps in practice is
not a despecialization. It's bringing the specialists together.
The more isolated or silo'd developers become, the less they realize what constitutes delivering
software, and the more problems are contributed to the IT process of test/build/release/scale/monitor,
etc. Writing code is a small fraction of that delivery process. I've written about the success
of devops and microservices that touches on this stuff because they're highly related. The future
success of devops/microservices/cloud/etc isn't related to technology insofar as it is culture:
http://blog.christianposta....
Great article and you're definitely describing one form of dysfunctional organisation where DevOps,
Agile, Full Stack, and every other $2 word has been corrupted to become a cost cutting justification;
cramming more work onto people who aren't skilled for it, and who end up not having any time to
do what they were hired as experts for!
But I'd also agree with other posters that it's a little developer centric. I'm a terrible
programmer and a great DBA. I can tell you most programmers who try to be DBAs are equally terrible.
It's definitely not "doing the job of the receptionist" 😄
And we shouldn't forget what DevOps is meant to be about: teams making sure nobody gets called
at night to fix each other's messes. That means neither developers with shitty deployments straight
to production nor operations letting the disks silently fill because "if it ain't C: it ain't
our problem."
I know of 0 developers that can manage a network of any appreciable scale.
In cloud and large enterprise networks, if there were a totem (which there isn't) using your
methodology would place the dev under the network engineer. Their software implements the protocol
and configuration intent of the NE. Good thing the whole concept is a pile of rubbish. I think
you fell into the trap you called out which is thinking at limited scale.
It's true. We can all create LAN's at home but I wouldn't dare f with a corporate network and
risk shutting down amazon for a day. Which seems to happen quite a bit.... maybe they're DEVOPPING
a bit too much.
Jeff Knupp is to one side of the spectrum. DevOps Reaper is to the other side.
Enno is more
attuned to what is really going on. So I won't repeat any of those arguments.
However I will ask you to put me in a box. What am I?
I graduated as a Computer Engineer (hybrid between Electrical Engineering and Computer Science).
I don't say that anymore as companies have no idea as to what that means. So I called myself a
Digital Electronics and Software Engineer for a while. The repeated question was all too often:
"So what are you, software or hardware?"
I spent my first few years working down from board design, writing VHDL and Verilog, to embedded
software in C and C++, then algorithms in optimization with the CUDA framework in C, with C++
wrappers and C# for the logic tier. Then I worked another few years in particle physics with C++
compute engines with x86 assembly declarations for speed and C# for WPF UIs.
After that I went to work for a wind turbine company as a system architect, where it was mostly embedded programming of ARM Cortex microprocessors, high-power electronics controls, and custom service and diagnostics tools in C#, plus real-time web-based dashboards with Angular, Bootstrap, and the like for a good-looking web app.
Nowadays I'm working with mobile first web applications that have a massive backend to power them.
It is mostly a .NET stack, from Entity Framework, to .NET Web API, to Angular-powered front ends. This company is not a start-up, but it is a small company, therefore I wear the many hats. I introduced the new software life cycle, which includes continuous integration and continuous deployment. Yes, I manage build servers and build tools, I develop, I'm QA, I'm a tester, I'm a DBA, I'm the deployment
and configuration manager.
If you are wondering, I have resorted to calling myself a full-stack developer. It has that edgy
sound that companies like to hear. I'm still a young developer. I've only been developing for
10 years.
In my team we are all "Jack of all Trades" and "Masters of Many". We switch tasks and hats
because it is fun and it keeps everyone from getting bored/stuck. Our process is called "Best practices
that work for this team".
So, I think of myself as a software engineer. I think I'm a developer. I think I'm DevOps,
I think I'm QA.
Let's start with the fact that DevOps didn't come from startups. It came mainly from Boeing and a few other
major blue chip IT shops, investing heavily in systems management technology around the turn of
the century. The goal at the time was simply to change the ratio of servers to IT support personnel,
and the re-thinking and re-organizing of development and operations into one organization with
one set of common goals. The 'wearing many hats' thing you discuss is a feature of startups, but
that feature is independent of siloed or integrated organizations.
I prefer the 'sportzing' analogy of basketball and football. Football has specialist teams
that are largely functionally independent because they focus on distinct goals. Basketball has
specialist positions, but the whole team is focused on the same goals. I'm not saying one sport
is better than the other. I am saying the basketball mentality works better in the IT environment.
Delivering the product or service to the customer is the common goal that everyone should be thinking
about, and how the details of their job fit into that overall picture. It sounds to me like you
are really saying "Hey, its my job and only my job to think about how it all fits together and
works"
Secondly, while it is pretty clear that the phrase 'full stack engineer' is about as useful
as "Cloud Computing", your perspective that somehow developers are the 'top' of the tree able
to do any job is very mistaken. There are key contributors from every specialty who have that
ability, and more useful names for them are things like "10x", or "T-shaped". Again, you are describing
a real situation, but correlating it with unrelated associations. It is just as likely, and just
as valuable, to find an information architect who can also code, or a systems admin that can also
diagnose database performance, or an electrician that can also hang sheetrock. Those people do
fit your analogy of 'being on top', because they are not siloed and stovepiped into just their
speciality.
The DevOps mindset fosters this way of thinking, instead of the old and outdated specialist
way of thinking you are defending. Is it possible your emotional reaction is fear of
the possibility that your relative value will decrease if others start thinking outside their
boxes?
Interesting to note that Agile also started at Boeing, but 10 years earlier. I live in the
startup world of Seattle, but know my history and realize that much of what appears new is actually
just 'new to you' (or me), and that most cutting-edge technology and thinking is just combining
ideas from other industries in new ways.
The problem is that developers are trained to crank out code and hope that QA teams will find
problems, many times not even sure how to catch holes. DevOps trains people to think critically
and do both. It isn't killing developers, it is making them look like noobs while phasing them
out.
Yeah, good luck with that attitude. Your company's gonna have a good'ole time looking for and
keeping new developer talent. Because as we all know, smart people love working with dummies.
I'd love to see 'your QA' team work on our 'spatial collision algorithm' and make our devs "look
like noob". You sound like most middle management schmucks.
Funniest article so far on full stack. It's a harsh reality for devs, because we're asked to do everything and know everything, so how can you really believe QA or a DBA can do the job of someone like that? There is a crazy amount of hours a full-stack dev invests in acquiring that kind of knowledge,
not to mention some people are also talented at their job. Imagine trying to tell the QA to do
that? Maybe for a few hours someone can be a backup just in case something happens, but really
it's like replacing the head surgeon.
The best skill you can learn in your coding career is your next career. No one wants a 45-year-old coder.
I see so much time wasted learning every new thing when you should just be plugging away to
get the job done, bank the $$, and move on. All your accumulated skills will be worthless in a
decade or so, and your entire knowledge useless in 2 decades. My ability to turn a wrench is what's
keeping me from the poor house. And I have a engineering degree from UIUC! I also don't mind.
Think about a 100 week as a plumber with OT in a reasonably priced neighborhood, vs a coder. Who
do you think is making more? Now I'm not saying you cant survive into your 50's programming, but
typically they get retired forcefully, and permanently.. by a heart attack!
But rambling aside.. the author makes a good point and i think is the future of big companies
in tech. The current model is driven by temporary factors. Ideally you'd have a specialized workforce.
But I think that as a programmer you are in constant fear of being obsolete so you don't want
to be pigeon-holed. It's just not mathematically possible to have that 10,000 hour mastery in
50 different areas.. unless you are Bill Murray in Groundhog Day.
A developer who sees himself at the top of a pyramid. Not surprising, your myopic and egotistical
view. I laugh at people who code a few SELECT statements and think they can fill the DBA role.
HA HA HA. God, the arrogance. "Well it worked on my machine." - How many sys admins have heard
this out of a developer's mouth. Unfortunately, projects get stuck with supporting such issues
because that very ego has led the developer too far down the road to turn back. They had no common
sense or modesty to call on the knowledge of their Sys Ops team to help design the application.
I interview job candidates all the time calling themselves full stack simply because they complement
their programming language of choice with a mere smattering of knowledge in client-side technologies
and can write a few SQL queries. Most developers have NO PERCEPTION of the myriad intricacies
it takes to get an application from their unabated desktop with its FULL ADMIN perms and "unlimited
resources", through a staging/QA environment, and eventually to the securely locked down production
system with limited, and perhaps shared or hosted, resources. Respect for your support teams,
communication and coordination, and the knowledge that you do not know it all. THAT'S being Full
Stack and DevOps sir.
There's always that one query that no one can do in a way that takes less than 2 hours until you pass it off to a real DBA. It's the 80/20 rule, basically. I truly don't believe 'full stack' exists. It's an illusion. There's always something that suffers.
The real problem is smart people are in such demand we're forced to adapt to this tribal pre-civilization
hodgepodge. Once the industry matures, it'll disappear. Until then they will think they re-invented
the wheel.
I'm confused here. DevOps roles are strictly automation-focused, at least
according to all job specifications I see on the internet. They don't need any development skills
at all. To me it looks like a new term for what we used to call IT Operations, but more scripting/automation
focused. DevOps engineer will need to know Puppet, Chef, Ansible, OS management, public cloud management,
know how to set up monitoring, logging and all that stuff usual sysadmin used to do but in the modern
world. In fact I used to apply for DevOps roles but quickly changed my mind as it turned out no companies
need a person wearing many hats, it has absolutely nothing to do with creating software. Am I wrong?
It depends on what you mean by development skills. Have you ever tried to automate the deployment of a large web application? In fact the scripts that automate the deployment of large scalable web applications are pretty complex software which requires in-depth thinking and should follow all the important principles a good developer should know: component isolation, scalability, maintainability, extensibility, etc.
Successful DevOps doesn't mean a full stack developer does it all; that's only true in a broken
company that succeeded despite bad organization. For example, Twitter's Dev-only culture is downright
sick, and ONLY works because they are in the tech field. Mind you, I still believe personally
that it works for them DESPITE its unbalanced structure. In other words, bad DevOps means the
Dev has no extra resources and just more requirements; yeah, that sucks!
BUT, on the flip side,
Infrastructure works with QA/Build to define supportable deployment standards, and they have to learn
all the automation bits and practice using them. Now Devs have to package all their applications
properly, in the formats supported by QA/Build's CI and repositories (that 'working just fine'
install script definitely doesn't count). BUT the Devs get pre-made CI-ready examples and, if
needed, code-migration assistance from the QA/Build team. Pretty soon they learn how to package
that type of app, like a J2EE Maven EAR or a Web Deploy package to IIS, and the rest should be handled
for them, as automatically as possible, by the proactive operations teams.
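For illustration, a hedged sketch of the kind of pre-made, CI-ready packaging step described above, assuming a Maven-built EAR; the repository URL, credentials and artifact name are placeholders:
#!/usr/bin/env bash
# Hypothetical CI packaging step: build the EAR and push it to a repository manager.
set -euo pipefail
mvn -B clean package    # produces target/myapp.ear as defined in the project's pom.xml
curl -fsS -u "$REPO_USER:$REPO_PASS" \
     --upload-file target/myapp.ear \
     "https://repo.example.com/releases/myapp/myapp-${BUILD_NUMBER:-dev}.ear"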
Make sense? This is how it's supposed to work. It sounds like you're left alone in a terrible
Dev-only/Dev-heavy world. The key to DevOps that is great, and that everybody likes, versus just more work, is
having a very balanced workflow between the teams, and making sure the pass-off points are VERY
well defined. Essentially it requires management that cuts the responsibility properly, so the teams
have a shared interest in collaborating. In a Dev-heavy organization, the Devs can just throw
garbage over the wall, and operations has to react to constant problems... they start to hate
each other and..... Dev managers get the idea that they can cut out ops if they do "DevOps",
so then they throw it all at you, like right now.
I see in this post so much rubbish and narrow-mindedness, so much of the exact stuff that is
killing companies of every type. In the last 10 years I have had many roles that required me, as a
systems engineer, to come in and straighten out all kinds of really bad compromises developers made
just to make stuff work.
The role never shows the level of intelligence or capability. I've seen so many situations
in the last 10 years where smart people with the wrong attitude and awareness are too smart for
anyone's good, and where limited people still provide more value than very smart ones acting as if
they are too smart to even have a conversation about anything.
This post is embarrassing for you, Jeff; I am sorry for you, man... you just don't get it!
A developer does not have to do full stack; the developer can continue with development, but has
to adopt some things around packaging, testing, and how the software is operated.
Operations can continue with operations, but has to know how things are built and packaged.
Developers and operations need to share things, like using the same application server, for example.
Developers need to understand how the code is operated to make sure that it is written in a proper
way. Operations needs to adapt to the need for fast delivery and be able to support a controlled
way of deploying daily into production.
Here is a complementary post I have around the topic:
http://bit.ly/1r3iVff
I will share my experience. I started off my career teaching programming, which included database
programming (Oracle PL/SQL, SQL Server Transact-SQL); that gave me good insights into database internals,
which landed me in the DBA world for the last 10 years. During these 10 years, in which I have worked in
technology companies regarded as top-notch, I have seen very smart developers writing excellent
application code but missing out on writing optimized code to interact with the database. Hence,
I think each job has a scale, and professionals of any group cannot do what the top professionals
of another group can do. I have seen developers with fairly good database internals knowledge, and
I have seen DBAs writing automation code that compares well with features of some commercial
database products like TOAD. So a generalization like this does not hold.
The idea that there is a hierarchy of usefulness is bunk. Most developers are horrible at operations
because they dislike it. Most sysadmins and DBAs are horrible at coding because they dislike it.
People gravitate to what interests them, and a disinterested person does a much poorer job than
an interested one. DevOps aims to combine roles by removing barriers, but there are costs to quality
that no one likes to talk about. To use your hierarchy example: most doctors could obtain their
RN, but they would not make good nurses.
This is an excellent article on the general concepts of DevOps and the DevOps movement. It helps
to identify the cultural shifts required to facilitate proper DevOps implementations. I also write
about DevOps. I authored a book on implementing CI, CD and DevOps related functions within an
organization, and it was recently published. The book is aptly titled Mastering Jenkins (
http://www.masteringjenkins... ) and aims to codify not only the architectural implementations
and requirements of DevOps but also the cultural shift needed to properly advocate for the adoption
of DevOps practices. Let me know what you think.
I agree. Although I'm not in the business (yet), I will be soon. What I've noticed just playing
around with Vagrant and Chef, Puppet, and Ansible is the great amount of time it takes to try to master just
one of these provisioners. I can't imagine being responsible for all the roles you spoke of
in the article. How can one possibly master all of them and be good at any of them?
Hmmm... users and the business see it as one application; for them, how it was developed or deployed does
not matter. IT is an enabler by definition, so DevOps is mostly about that: giving one
view to the customer; quick changes, stable changes, a stable application.
Frankly, DevOps is not about developers or testers. It is about the right architecture and the right
framework. Developers and testers do what is in the script anyway; to them, DevOps is just a new
set of scripts.
For DevOps done right, you need the right framework and architecture for the whole of the program; you
need an architecture which is built end to end and not in silos.
A software developer writes code that the business/customer uses.
A test developer writes test code to test the SUT.
A release developer writes code to automate the release process.
An infrastructure developer writes code to create infrastructure automatically.
A performance developer writes code to performance-test the SUT.
A security developer writes code to scan the SUT for security issues.
A database developer writes code for the DB.
So which developer do you think DevOps is going to kill?
In today's TDD world, a developer (it could be any of the above) needs to get out of their comfort
zone to make sure they write testable, releasable, deployable, performant, security-compliant
and maintainable code.
DevOps brings all these roles together to collaborate and deliver.
Why wouldn't they be? What are the basic responsibilities that make for a passable DBA, and which
of those responsibilities cannot be done by a good developer? Say a good developer has just average
experience writing stored procs, analyzing query performance, creating (or choosing not to create,
for performance reasons) indexes, constraints and triggers, configuring database access rights,
setting up regular backups, doing regular maintenance (e.g. rebuilding indexes to avoid fragmentation)...
just to name a few.
I'm sure there are several responsibilities that DBAs have that developers
would have very little to no experience in, but we're talking about making for a passable DBA.
Developers may not be as good at the job as someone who specializes in it for a living, but the
author's wording seems to have been chosen very carefully.
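As a rough illustration of the "regular backups / regular maintenance" items mentioned above, here is a hedged sketch assuming a PostgreSQL database; the database name, backup path and retention period are assumptions:
#!/usr/bin/env bash
# Hypothetical nightly maintenance job (run from cron as the postgres user).
set -euo pipefail
DB="appdb"
BACKUP_DIR="/var/backups/postgres"
mkdir -p "$BACKUP_DIR"

pg_dump --format=custom --file="$BACKUP_DIR/${DB}_$(date +%F).dump" "$DB"   # logical backup
psql -d "$DB" -c "REINDEX DATABASE ${DB};"       # rebuild fragmented/bloated indexes
psql -d "$DB" -c "VACUUM ANALYZE;"               # refresh planner statistics
find "$BACKUP_DIR" -name "${DB}_*.dump" -mtime +14 -delete   # keep two weeks of dumps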
Yup, I see lots of people trying to defend the DBA as a thing, just like people keep
trying to defend the traditional sysadmin as a thing. I started my career as a sysadmin in the 90s,
but times have changed and I don't call myself a sysadmin anymore, because that's not what I do.
Now I'm a Systems Engineer/SRE. My mode of working isn't slamming software together, but engineering
automation to do it for me.
But I also do QA, Data storage performance analysis, networking, and [have a] deep knowledge of the applications
I support.
"... The Emergence of the "DevOps' DevOp", a pseudo intellectual loudly spewing theories about distantly unrelated fields that are entirely irrelevant and are designed to make them feel more intelligent and myself more inadequate ..."
"... "The Copenhagen interpretation certainly applies to DevOps" ..."
"... "I'm modeling the relationship between Dev and Ops using quantum entanglement, with a focus on relative quantum superposition - it's the only way to look at it. Why aren't you?" ..."
"... Enterprise Architects. They used to talk about the "Enterprise Continuum". Now they talk about "The Delivery Continuum" or the "DevOps Continuum". ..."
DevOps and I sort of have a love/hate relationship. DevOps is near and dear to our heart here at
UpGuard and there are plenty
of things that I love about it . Love it or hate it, there is little doubt that it is here to
stay. I've enjoyed a great deal of success thanks to agile software development and DevOps methods,
but here are 10 things I hate about DevOps!
#1 Everyone thinks it's about Automation.
#2 "True" DevOps apparently have no processes - because DevOps takes care of that.
#3 The Emergence of the "DevOps' DevOp", a pseudo intellectual loudly spewing theories about
distantly unrelated fields that are entirely irrelevant and are designed to make them feel more intelligent
and myself more inadequate:
"The Copenhagen interpretation certainly applies to DevOps"
"I'm modeling the relationship between Dev and Ops using quantum entanglement, with a focus
on relative quantum superposition - it's the only way to look at it. Why aren't you?"
#4 Enterprise Architects. They used to talk about the "Enterprise Continuum". Now they talk
about "The Delivery Continuum" or the "DevOps Continuum". How about talking about the business
guys?
#5 Heroes abound with tragic statements like "It took 3 days to automate everything... it's great
now!" Clearly these people have never worked in a serious enterprise.
#6 No one talks about automation failure... it's everywhere. (Listen for the words "pockets
of automation".) Adoption of technology, education and adaptation of process is rarely mentioned (or
measured).
#7 People constantly pointing to Etsy, Facebook & Netflix as DevOps. Let's promote the stories
of companies that better represent the market at large.
#8 Tech hipsters discounting, or underestimating, Windows sysadmins. There are a lot of them and
they better represent the Enterprise than many of the higher profile blowhards.
#9 The same hipsters saying their threads have filled up with DevOps tweets where there were none
before.
#10 I've never heard of a Project Manager taking on DevOps. I intend to find one.
What do you think - did I miss anything? Rants encouraged ;-) Please add your comments.
"... DevOps. The latest software development fad. ..."
"... Continuous Delivery (CD), the act of small, frequent, releases was defined in detail by Jez Humble and Dave Farley in their book – Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. The approach makes a lot of sense and encourages a number of healthy behaviors in a team. ..."
"... The problem is we now have teams saying they're doing DevOps. By that they mean is they make small, frequent, releases to production AND the developers are working closely with the Ops team to get things out to production and to keep them running. ..."
"... Well, the problem is the name. We now have a term "DevOps" to describe the entire build, test, release approach. The problem is when you call something DevOps anyone who doesn't identify themselves as a dev or as Ops automatically assumes they're not part of the process. ..."
DevOps. The latest software development fad. Now you can be Agile, use Continuous Delivery, and believe
in DevOps.
Continuous Delivery (CD), the act of small, frequent, releases was defined in detail by Jez Humble
and Dave Farley in their book – Continuous Delivery: Reliable Software Releases Through Build, Test,
and Deployment Automation. The approach makes a lot of sense and encourages a number of healthy
behaviors
in a team. For example, frequent releases more or less have to be small. Small releases are easier
to understand, which in turn increases our chances of building good features, but also our chances
of testing for the right risks. If you do run into problems during testing then it's pretty easy
to work out the change that caused them, reducing the time to debug and fix issues.
Unfortunately, along with all the good parts of CD we have a slight problem. The book focused
on the areas which were considered to be the most broken, and unfortunately that led to the original
CD description implying "Done" meant the code was shipped to production. As anyone who has ever worked
on software will know, running code in production also requires a fair bit of work.
So, teams started adopting CD but no one was talking about how the Ops team fitted into the release
cycle. Everything from knowing when production systems were in trouble, to reliable release systems
was just assumed to be fully functional, and unnecessary for explanation.
To try to plug the gap DevOps rose up.
Now, just to make things even more confusing: Dave Farley later said that not talking about Ops
was an omission and that CD does include the entire development and release cycle, including running in
production. So DevOps and CD have some overlap there.
DevOps does take a slightly different angle on the approach than CD. The emphasis for DevOps is
on the collaboration rather than the process. Silos should be actively broken down to help developers
understand systems well enough to be able to write good, robust and scalable code.
So far so good.
The problem is we now have teams saying they're doing DevOps. By that they mean they make small,
frequent releases to production AND the developers are working closely with the Ops team to get
things out to production and to keep them running.
Sounds good. So what's the problem?
Well, the problem is the name. We now have a term "DevOps" to describe the entire build, test,
release approach. The problem is when you call something DevOps anyone who doesn't identify themselves
as a dev or as Ops automatically assumes they're not part of the process.
Seriously, go and ask your designers what they think of DevOps. Or how about your testers. Or
Product Managers. Or Customer Support.
And that's a problem.
We've managed to take something that is completely dependent on collaboration and trust, and
name it in a way that excludes a significant number of people. All of the name suggestions that arise
when you mention this are just ridiculous. DevTestOps? BusinessDevTestOps? DesignDevOps? Aside from
just being stupid names, these continue to exclude anyone who doesn't have these words in their title.
So do I hate DevOps? Well no, not the practice. I think we should always be thinking about how
things will actually work in production. We need an Ops team to help us do that so it makes total
sense to have them involved in the process. Just take care with that name.
Is there a solution? Well, in my mind we're still talking about collaboration above all else. Thinking
about CD as "delivery on demand" also makes more sense to me. We, the whole team, should be ready
to deliver working software to the customer when they want it. By being aware of the confusion and
exclusion that some of these names create, we can hopefully bring everyone into the project before
it's too late.
DevOps initiatives include a range of technologies and methodologies spanning the software delivery
process. IT leaders and DevOps practitioners should proactively understand the readiness and capabilities
of technology to identify the most appropriate choices for their specific DevOps initiative.
Table of Contents
Analysis
What You Need to Know
The Hype Cycle
The Priority Matrix
On the Rise
DevOps Toolchain Orchestration
DevOps Toolchain
Mobile DevOps
Web-Scale Operations
Continuous Delivery
Lean IT
User and Entity Behavior Analytics
Continuous Experience
Application Release Automation
Mediated APIs
At the Peak
Web-Scale Development
Container Management
Web-Scale Application Architecture
Microservices
Crowdtesting
Enterprise-Class Agile Development
Software-Defined Data Center
Continuous Configuration Automation
Behavior-Driven Development
Sliding Into the Trough
Citizen Developers
Configuration Auditing
Software-Defined Networking
Application Testing Services
Climbing the Slope
Application Performance Monitoring Suites
Test Data Management
Appendixes
Hype Cycle Phases, Benefit Ratings and Maturity Levels
The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win (2013) is the third
book by Gene Kim. The business novel tells the story of an IT manager who has ninety days to rescue
an over-budget and late IT initiative, code-named The Phoenix Project. The book was co-authored by
Kevin Behr and George Spafford and published by IT Revolution Press in January 2013.[1][2]
Background
The novel is thought of as the modern day version of The Goal by Eliyahu M. Goldratt.[3] The novel
describes the problems that almost every IT organization faces, and then shows the practices of how
to solve the problems, improve the lives of those who work in IT and be recognized for helping the
business win.[1] The goal of the book is to show that a truly collaborative approach between IT and
business is possible.[4]
Synopsis
The novel tells the story of Bill, the IT manager at Parts Unlimited.[4][5][6] The company's new
IT initiative, code named Phoenix Project, is critical to the future of Parts Unlimited, but the
project is massively over budget and very late. The CEO wants Bill to report directly to him and
fix the mess in ninety days or else Bill's entire department will be outsourced. With the help of
a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that
IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking,
Bill must organize work flow, streamline interdepartmental communications, and effectively serve
the other business functions at Parts Unlimited.[7][8]
Reception
The book has been called a "must read" for IT professionals and quickly reached #1 in its Amazon.com
categories.[9][10] The Phoenix Project was featured on 800 CEO Reads Top 25: What Corporate America
Is Reading for June, 2013.[11] InfoQ stated, "This book will resonate at one point or another with
anyone who's ever worked in IT."[4] Jeremiah Shirk, Integration & Infrastructure Manager at Kansas
State University, said of the book: "Some books you give to friends, for the joy of sharing a great
novel. Some books you recommend to your colleagues and employees, to create common ground. Some books
you share with your boss, to plant the seeds of a big idea. The Phoenix Project is all three."[4]
Other reviewers were more skeptical, including the IT Skeptic "Fictionalising allows you to paint
an idealised picture, and yet make it seem real, plausible... Sorry but it is all too good to be
true... none of the answers are about people or culture or behaviour. They're about tools and techniques
and processes." [12] Jez Humble (author of Continuous Delivery) said "unlike real life, there aren't
many experiments in the book that end up making things worse..."
In a recent webinar, XebiaLabs VP of DevOps Strategy Andrew Phillips sat down with Atos Global
Thought Leader in DevOps Dick van der Sar to separate the facts from the fiction. Their findings:
most myths come attached with a small piece of fact and vice versa.
1. DevOps Is Developers Doing Operations: Myth
An integral part of DevOps' automation component involves a significant amount of code. This causes
people to believe Developers do most of the heavy lifting in the equation. In reality, because of
the amount of Infrastructure as Code, what ends up happening is that Ops begins to look a lot like Dev.
2. Projects Are Dead: Myth
Projects are an ongoing process of evolving systems and failures. To think they can just be handed
off to maintenance forever after completion is simply incorrect. This is only true for tightly scoped
software needs, including systems built for specific events. When you adopt DevOps and Agile, you
are replacing traditional project-based approaches with a focus on product lifecycles.
3. DevOps Doesn't Work in Complex Environments: Myth
DevOps is actually made to thrive in complex environments. The only instance in which it doesn't
work is when unrealistic and/or inappropriate goals are set for the enterprise. Complex environments
typically suffer due to lack of communication about the state of, and changes to, the interconnected
systems. DevOps, on the other hand, encourages communication and collaboration that prevent these
issues from arising.
4. It's Hard to Sell DevOps to the Business: Myth
The benefits of DevOps are closely tied to benefiting the business. However, that's hard to believe
when you pitch adopting DevOps as a plan to "stop working on features and sink a lot of your money
into playing with shiny new IT tech." The truth is, DevOps is going to impact the entire enterprise.
This may be the source of resistance, but as long as you find the balance between adoption and disruption,
you will experience a successful transition.
5. Agile Is for Lazy Engineers: Myth
DevOps prides itself on eliminating unnecessary overhead. Through automation, your enterprise
can see a reduction in documentation, meetings, and even manual tasks, giving team members more time
to focus on more important priorities. You know your team is running successfully if their productivity
increases.
Nonetheless, DevOps does not come without its own form of "boring" processes, including test plans
or code audits. Agile may eliminate waste but that doesn't include the tedious yet necessary aspects.
6. If You Can't Code, You Have No Chance in DevOps: Fact
This is only a fact because the automation side of DevOps is all Infrastructure as Code (IaC).
This typically requires some software development skills, such as modularization, automated
testing, and Continuous Integration (CI). Regardless of scale, automating anything will require,
at the very least, software development skills.
7. Managers Disappear: Myth
Rather than disappear, managers take a different role with DevOps. In fact, they are still a necessity
to the team. Managers are tasked with the responsibility of keeping the entire DevOps team on track.
Classic management tasks may seem to disappear but only because the role is changing to be more focused
on empowerment.
8. DevOps or Die: Fact!
Many of today's market leaders already have some sort of advanced DevOps structure in place. As
industries incorporate IT further into their business, we will begin to see DevOps as a basic necessity
to the modern business and those that can't adapt will simply fall behind.
That being said, you shouldn't think of DevOps as the magic invincibility potion that will keep
your enterprise failure free. Rather, DevOps can prevent many types of failure, but there will always
be environment specific threats unique to every organization that DevOps can't rescue you from.
Out of these misunderstandings several common myths have been created. Acceptance of these myths misleads business further. Here are some of the most common myths and the facts that debunk them.
Myth 1: DevOps needs agile.
Although DevOps and agile are terms frequently used together, they are a long way from being synonymous with one another. Agile development refers to a method of software delivery that builds software incrementally, whereas DevOps refers not only to a method of delivery but to a culture which, when adopted, results in many business benefits, including faster software delivery.
DevOps processes can help to complement agile development, but DevOps is not reliant on agile and can support a range of operating models:
Waterfall – where build processes can be optimised and accelerated, and automation can be implemented.
Agile – where heightened communication between development and operations increases end-product quality.
Hybrid approach – where speed, quality and compliance are all increased.
For optimum results, full adoption of the DevOps philosophy is necessary.
Myth 2: DevOps can't work with legacy.
DevOps is often regarded as a modern concept that helps forward-thinking businesses innovate. Although this is true, it can also help those organisations with long-established, standard IT practices. In fact, with legacy applications there are usually big advantages to DevOps adoption.
Managing legacy care while bringing new software to market quickly, blending stability and agility, is a frequently encountered problem in this new era of digital transformation. Bi-modal IT is an approach where Mode 1 refers to legacy systems focussed on stability, and Mode 2 refers to agile IT focussed on rapid application delivery. DevOps principles are often included exclusively within Mode 2, but automation and collaboration can also be used with success within Mode 1 to increase delivery speed whilst ensuring stability.
Myth 3: DevOps is only for continuous delivery.
DevOps doesn't (necessarily) imply continuous delivery. The aim of a DevOps culture is to increase the delivery frequency of an organisation, often from quarterly/monthly to daily releases or more, and improve its ability to respond to changes in the market.
While continuous delivery relies heavily on automation and is aimed at agile and lean thinking organisations, unlike DevOps it is not reliant on a shared culture which enhances collaboration. Gartner summed up the distinction with a report that stated: "DevOps is not a market, but a tool-centric philosophy that supports a continuous delivery value chain."
Myth 4: DevOps requires new tools.
As with the implementation of any new concept or idea, a common misconception about DevOps adoption is that new toolsets and skills are required. Though the provision of appropriate and relevant tools can aid adoption, organisations are by no means required to replace the tools and processes they use to produce software.
DevOps enables organisations to deliver new capabilities more easily, and bring new software into production more rapidly in order to respond to market changes. It is not strictly reliant on new tools to get this job done.
Myth 5: DevOps is a skill.
The rapid growth of the DevOps movement has resulted in huge demand for professionals who are skilled within the methodology. However, this fact is often misconstrued to suggest that DevOps is itself a skill – this is not the case.
DevOps is a culture – one that needs to be fully adopted throughout an entire organisation for optimum results, and one that is best supported with appropriate and relevant tools.
Myth 6: DevOps is software.
Understanding that DevOps adoption can be better facilitated with software is important; however, maybe more so is understanding that they are not one and the same. Although it is true that there is a significant amount of DevOps software available on the market today, purchasing a specific ad-hoc DevOps product, or even a suite of products, will not make your business 'DevOps'.
The DevOps methodology is the communication, collaboration and automation of your development and operations functions and, as described above, it needs to be adopted by an entire organisation to achieve optimum results. The software and tools available will undoubtedly reduce the strain of adoption on your business, but conscious adoption is required for your business to fully reach the potential that DevOps offers.
Conclusion
Like any new and popular term, people have somewhat confused and sometimes contradictory or partial impressions of what DevOps is and how it works. DevOps is a philosophy which enables businesses to automate their processes and work more collaboratively to achieve a common goal and deliver software more rapidly.
"... In just the past month, the Valley has seemed like it's happily living in some sort of sadomasochistic bubble worthy of a bad Hollywood satire. ..."
It has been said that Silicon Valley, or the 50 or so square-mile area extending from San Francisco
to the base of the peninsula, has overseen the creation of more wealth than any place in the history
of mankind. It's made people richer than the oil industry; it has created more money than the Gold
Rush. Silicon chips, lines of code, and rectangular screens have even minted more wealth than religious
wars.
Wealthy societies, indeed, have their own complicated incentive structures and mores. But they
do often tend, as any technological entrepreneur will be quick to remind you, to distribute value
across numerous income levels, in a scaled capacity. The Ford line, for instance, may have eventually
minted some serious millionaires in Detroit, but it also made transportation cheaper, helped drive
down prices on countless consumer goods, and facilitated new trade routes and commercial opportunities.
Smartphones, or any number of inventive modern apps or other software products, are no different.
Sure, they throw off a lot of money to the geniuses who came up with them, and the people who got
in at the ground floor. But they also make possible innumerable other opportunities, financial and
otherwise, for their millions of consumers.
Silicon Valley is, in its own right, a dynasty. Instead of warriors or military heroes, it has
nerds and people in half-zip sweaters. But it is becoming increasingly likely that the Valley might
go down in history not only for its wealth, but also for creating more tone deaf people than any
other ecosystem in the history of the world.
In just the past month, the Valley has seemed like it's happily living in some sort of sadomasochistic
bubble worthy of a bad Hollywood satire. Uber has endured a slate of scandals that would have
seriously wounded a less culturally popular company (or a public one, for that matter). There was
one former employee's allegation of
sexual harassment (which the company reportedly investigated); a report of
driver manipulation ; an unpleasant video depicting C.E.O. Travis Kalanick furiously berating
an Uber driver; a story about secret software that could
subvert regulators ; a report of
cocaine use and groping at holiday parties (an offending manager was fired within hours of the
scandal); a lawsuit for potentially buying
stolen software from a competitor;
more
groping ; a slew of
corporate exits ; and a
driverless car
crash . (The shit will really hit the fan if it turns out that Uber's self-driving technology
was
misappropriated from Alphabet's Waymo; Uber has called the lawsuit "baseless.")
Then there was Facebook, which held its developer conference while the Facebook Killer was on
the loose. As Mat Honan of BuzzFeed
put it so eloquently: "People used to talk about
Steve Jobs
and Apple's
reality distortion field . But Facebook, it sometimes feels, exists in a reality hole. The company
doesn't distort reality-but it often seems to lack the ability to recognize it."
And we ended the week with the ultimate tone-deaf statement from the C.E.O. of Juicero, the maker
of a $700 (soon reduced to $400) juicer that has $120 million in venture backing. After
Bloomberg News discovered that you didn't even need the juicer to make juice (there are,
apparently, these things called hands), the company's chief executive, Jeff Dunn, offered a
response on Medium insinuating that he gets up every day to make the world a better place.
Of course, not everyone who makes the pilgrimage out West is, or becomes, a jerk. Some people
arrive in the Valley with a philosophy of how to act as an adult. But here's the problem with that
group: most of them don't vociferously articulate how unsettled they are by the bad actors. Even
when journalists manage to cover these atrocious activities, the powers of Silicon Valley try to
ridicule them, often in public. Take, for example, the 2015 TechCrunch Disrupt conference, when a
reporter asked billionaire investor Vinod Khosla (who evidently believes that
public beaches should belong to rich people) about some of the ethical controversy surrounding
the mayonnaise-disruption startup Hampton Creek (I can't believe I just wrote the words "mayonnaise-disruption").
Khosla responded with a trite and rude retort that the company was fine. When the reporter pressed
Khosla, he shut him down by saying, "I know a lot more about how they're doing, excuse me, than
you do." A year later the
Justice Department opened a criminal investigation into whether the company
defrauded
investors when employees secretly purchased the company's own mayonnaise from grocery stores
. (The Justice Department has since dropped its investigation.)
When you zoom out of that 50-square-mile area of Silicon Valley, it becomes obvious that big businesses
can get shamed into doing the right thing. When it was discovered that Volkswagen lied about emissions
outputs, the company's C.E.O.
was forced to resign . The same was true for
the chief of Wells Fargo, who was embroiled in a financial scandal. In the wake of its recent
public scandal, United recently knocked its C.E.O. down a peg. Even Fox News, one of the most
bizarrely unrepentant media outlets in America, pushed out
two of the most important people at the network over allegations of sexual harassment. ( Bill
O'Reilly has said that claims against him are "unfounded"; Roger Ailes has vociferously denied allegations
of sexual harassment.) Even Wall Street can (sometimes) be forced to be more ethical. Yet Elizabeth
Holmes is still C.E.O. of Theranos.
Travis
Kalanick is still going to make billions of dollars as the chief of Uber when the company eventually
goes public.
The list
goes on and on .
In many respects, this is simply the D.N.A. of Silicon Valley. The tech bubble of the mid-90s
was inflated by lies that sent the NASDAQ on a
vertiginous downward spike that eviscerated the life savings of thousands of retirees and Americans
who believed in the hype. This time around, it seems that some of these business may be real, but
the people running them are still as tone deaf regarding how their actions affect other people. Silicon
Valley has indeed created some amazing things. One can only hope these people don't erase it with
their hubris.
E-commerce start-up Fab was once valued at $900 million, a near unicorn in Silicon Valley terms.
But after allegedly burning through $200 million of its $336 million in venture capital, C.E.O. Jason
Goldberg was forced to shutter its European arm and lay off two-thirds of its staff.
Fired in 2014 from his ad-tech firm RadiumOne following a domestic-violence conviction, Gurbaksh
Chahal founded a new company to compete with the one he was kicked out of. But Gravity4, his new
firm, was sued for gender discrimination in 2015, though that case is still pending, and former employees
have contemplated legal action against him.
In theory, you could reduce the size of sda1, increase the size of the extended
partition, shift the contents of the extended partition down, then increase the
size of the PV on the extended partition and you'd have the extra room. However, the
number of possible things that can go wrong there is just astronomical, so I'd recommend either
buying a second hard drive (and possibly transferring everything onto it in a more sensible
layout, then repartitioning your current drive better) or just making some bind mounts of various
bits and pieces out of /home into / to free up a bit more space.
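A hedged example of that bind-mount approach (run as root); /opt is only an illustration of a directory that might be eating space on the root filesystem:
cp -a /opt /home/opt                      # copy first and verify before deleting anything
rm -rf /opt && mkdir /opt
mount --bind /home/opt /opt
echo '/home/opt /opt none bind 0 0' >> /etc/fstab   # make the bind mount persistent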
If the destination does not exist, mv behaves as a rename command; but if the destination exists and is a directory, it moves the source down one level into it.
For example, if you have the directories /home and /home2 and want to move all subdirectories from /home2 to /home,
and the directory /home is empty, you can't simply use
mv home2 home
If you forget to remove the directory /home first, mv will silently create the directory /home/home2, and you have a problem if
these are user home directories.
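A short demonstration of the pitfall and a safer way to merge the contents (paths follow the example above):
# What the paragraph above warns about: /home exists, so mv nests home2 inside it.
mv /home2 /home                 # result: /home/home2/<users>, probably not what you wanted

# Safer: merge the contents explicitly, then remove the now-empty source.
mv /home2/* /home/              # move hidden files separately if any exist (e.g. /home2/.[!.]*)
rmdir /home2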
-p -- Preserve the characteristics of the source_file. Copy the contents, modification times, and permission modes
of the source_file to the destination files.
You might wish to create an alias
alias cp='cp -p'
as I can't imagine a case where the regular Unix behaviour is desirable.
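If you do adopt it, a hedged way to make the alias permanent for interactive shells, plus the related -a flag for whole trees (file locations may vary by distribution):
echo "alias cp='cp -p'" >> ~/.bashrc    # persist the alias for interactive shells
cp -a /etc/nginx /etc/nginx.bak         # for whole trees, -a implies -p plus recursion and symlink preservation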
A very interesting discussion of how the project of mass surveillance of internet traffic started
and what the major challenges were. That is probably where the idea came from of collecting "envelopes" and correlating
them to build a social network, similar to what was done during the Civil War.
The idea of preventing corruption of the medical establishment in order to prevent Medicare fraud is also very interesting.
Notable quotes:
"... I suspect that it's hopelessly unlikely for honest people to complete the Police Academy; somewhere early on the good cops are weeded out and cannot complete training unless they compromise their integrity. ..."
"... 500 Years of History Shows that Mass Spying Is Always Aimed at Crushing Dissent It's Never to Protect Us From Bad Guys No matter which government conducts mass surveillance, they also do it to crush dissent, and then give a false rationale for why they're doing it. ..."
"... People are so worried about NSA don't be fooled that private companies are doing the same thing. ..."
"... In communism the people learned quick they were being watched. The reaction was not to go to protest. ..."
"... Just not be productive and work the system and not listen to their crap. this is all that was required to bring them down. watching people, arresting does not do shit for their cause ..."
"People who believe in these rights very much are forced into compromising their integrity"
I suspect that it's hopelessly unlikely for honest people to complete the Police Academy; somewhere
early on the good cops are weeded out and cannot complete training unless they compromise their
integrity.
500 Years of History Shows that Mass Spying Is Always Aimed at Crushing Dissent It's Never to Protect Us From Bad Guys No matter which government conducts mass surveillance,
they also do it to crush dissent, and then give a false rationale for why they're doing it.
I am wondering how much damage your spying did to the Foreign Countries, I am wondering how
you changed regimes around the world, how many refugees you helped to create around the world.
Don Kantner, 2 weeks ago
People are so worried about the NSA; don't be fooled that private companies are doing the same
thing. Plus, the truth is, if the NSA wasn't watching, any fool with a computer could potentially
cause a worldwide economic crisis.
Bettor in Vegas 1 year ago
In communism the people learned quick they were being watched. The reaction was not to go to
protest.
Just not be productive and work the system and not listen to their crap. this is all that was
required to bring them down. watching people, arresting does not do shit for their cause......
ShellCheck is a static analysis tool that shows warnings and suggestions concerning bad code in bash/sh shell scripts. It can be used in several ways: from the web, by pasting your shell script into an online editor (Ace – a standalone code editor written in JavaScript) at https://www.shellcheck.net (it is always synchronized to the latest git commit, and is the simplest way to give ShellCheck a go) for instant feedback. Alternatively, you can install it on your machine and run it from the terminal, integrate it with your text editor, or include it in your build or test suites.
There are three things ShellCheck does primarily:
It points out and explains typical beginner's syntax issues that cause a shell to give cryptic error messages.
It points out and explains typical intermediate-level semantic problems that cause a shell to behave strangely and counter-intuitively.
It also points out subtle caveats, corner cases and pitfalls that may cause an advanced user's otherwise working script to fail under future circumstances.
In this article, we will show how to install and use ShellCheck in the various ways to find bugs or bad code in your shell scripts in Linux.
How to Install and Use ShellCheck in Linux
ShellCheck can be easily installed locally through your package manager, as shown below.
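The article's install commands appeared only as a screenshot; the usual package-manager invocations are:
$ sudo apt-get install shellcheck      # Debian/Ubuntu
$ sudo dnf install ShellCheck          # Fedora
$ sudo yum install ShellCheck          # CentOS/RHEL (via the EPEL repository)
$ brew install shellcheck              # macOS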
Once ShellCheck is installed, let's take a look at how to use it in the various ways we mentioned before.
Using ShellCheck From the Web
Go to https://www.shellcheck.net and paste your script into the Ace editor provided; you will see the output at the bottom of the editor, as shown in the screenshot below.
In the following example, the test shell script consists of the following lines:
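The script itself was shown only as a screenshot in the original; the following is a reconstruction inferred from the warnings discussed below (the variable names come from the article, everything else is assumed):
#!/bin/bash
E_NOTROOT=87           # declared but never used  -> SC2034
E_MINARGS=85           # declared but never used  -> SC2034
echo $E_NONROOT        # misspelling of E_NOTROOT -> SC2153, and unquoted -> SC2086
echo $1                # unquoted expansion       -> SC2086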
From the screenshot above, the first two variables E_NOTROOT and E_MINARGS have been declared but are unused. ShellCheck reports these as "suggestive errors":
SC2034: E_NOTROOT appears unused. Verify it or export it.
SC2034: E_MINARGS appears unused. Verify it or export it.
Secondly, the wrong name was used (in the statement echo $E_NONROOT) to echo the variable E_NOTROOT, which is why ShellCheck shows the error:
SC2153: Possible misspelling: E_NONROOT may not be assigned, but E_NOTROOT is
Again, when you look at the echo commands, the variables have not been double quoted (quoting helps to prevent globbing and word splitting), therefore ShellCheck shows the warning:
SC2086: Double quote to prevent globbing and word splitting.
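For completeness, a corrected version of the assumed script that addresses all three findings might look like this:
#!/bin/bash
E_NOTROOT=87
echo "$E_NOTROOT"      # variable is now used, spelled correctly, and quoted
echo "$1"              # quoted to prevent globbing and word splitting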
Using ShellCheck From the Terminal
You can also run ShellCheck from the command line; we'll use the same shell script as above:
$ shellcheck test.sh
ShellCheck – Checks Bad Code in Shell Scripts
Using ShellCheck From the Text Editor
You can also view ShellCheck suggestions and warnings directly in a variety of editors; this is probably a more efficient way of using ShellCheck, since once you save a file it shows you any errors in the code.
In Vim, use ALE or Syntastic (we will use the latter).
Start by installing Pathogen so that it's easy to install Syntastic. Run the commands below to get the pathogen.vim file and the directories it needs:
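The commands themselves appeared as a screenshot in the original; the standard Pathogen installation (from the project's README) is:
$ mkdir -p ~/.vim/autoload ~/.vim/bundle
$ curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim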
Once you have installed Pathogen, you can put Syntastic into ~/.vim/bundle as follows:
# cd ~/.vim/bundle && git clone --depth=1 https://github.com/vim-syntastic/syntastic.git
Next, close Vim and start it back up to reload it, then type the command below:
:Helptags
If all goes well, you should have ShellCheck integrated with Vim; the following screenshots show how it works using the same script above.
Check Bad Shell Script Code in Vim
In case you get an error after following the steps above, then you possibly didn't install Pathogen correctly. Redo the steps, but this time ensure that you did the following (a minimal ~/.vimrc along these lines is sketched after this list):
Created both the ~/.vim/autoload and ~/.vim/bundle directories.
Added the execute pathogen#infect() line to your ~/.vimrc file.
Did the git clone of syntastic inside ~/.vim/bundle.
Used appropriate permissions to access all of the above directories.
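A minimal ~/.vimrc along those lines can be appended as follows; the Syntastic checker setting is a common recommendation rather than something the article specifies:
$ cat >> ~/.vimrc <<'EOF'
execute pathogen#infect()
syntax on
filetype plugin indent on
let g:syntastic_sh_checkers = ['shellcheck']
EOF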
You can also use other editors to check bad code in shell scripts.
That's it! In this article, we showed how to install and use ShellCheck to find bugs or bad code in your shell scripts in Linux. Share your thoughts with us via the comment section below. Do you know of any other similar tools out there? If yes, then share info about them in the comments as well.
Paul Krugman Gets Retail Wrong: They are Not Very Good Jobs
Paul Krugman used his column * this morning to ask why we don't pay as much attention to the
loss of jobs in retail as we do to jobs lost in mining and manufacturing. His answer is that in
large part the former jobs tend to be more white and male than the latter. While this is true,
although African Americans have historically been over-represented in manufacturing, there is
another simpler explanation: retail jobs tend to not be very good jobs.
The basic story is that jobs in mining and manufacturing tend to offer higher pay and are far
more likely to come with health care and pension benefits than retail jobs. A worker who loses
a job in these sectors is unlikely to find a comparable job elsewhere. In retail, the odds are
that a person who loses a job will be able to find one with similar pay and benefits.
A quick look at average weekly wages ** can make this point. In mining the average weekly wage
is $1,450, in manufacturing it is $1,070, by comparison in retail it is just $555. It is worth
mentioning that much of this difference is in hours worked, not the hourly pay. There is nothing
wrong with working shorter workweeks (in fact, I think it is a very good idea), but for those
who need a 40 hour plus workweek to make ends meet, a 30-hour a week job will not fit the bill.
This difference in job quality is apparent in the difference in separation rates by industry.
(This is the percentage of workers who lose or leave their job every month.) It was 2.4 percent
for the most recent month in manufacturing. By comparison, it was 4.7 percent in retail, almost
twice as high. (It was 5.2 percent in mining and logging. My guess is that this is driven by logging,
but I will leave that one for folks who know the industry better.)
Anyhow, it shouldn't be a mystery that we tend to be more concerned about the loss of good
jobs than the loss of jobs that are not very good. If we want to ask a deeper question, as to
why retail jobs are not very good, then the demographics almost certainly play a big role.
Since only a small segment of the workforce is going to be employed in manufacturing regardless
of what we do on trade (even the Baker dream policy will add at most 2 million jobs), we should
be focused on making retail and other service sector jobs good jobs. The full agenda for making
this transformation is a long one (higher minimum wages and unions would be a big part of the
picture, along with universal health care insurance and a national pension system), but there
is one immediate item on the agenda.
All right minded people should be yelling about the Federal Reserve Board's interest rate hikes.
The point of these hikes is to slow the economy and reduce the rate of job creation. The Fed's
concern is that the labor market is getting too tight. In a tighter labor market workers, especially
those at the bottom of the pecking order, are able to get larger wage increases. The Fed is ostensibly
worried that this can lead to higher inflation, which can get us to a wage price spiral like we
saw in the 70s.
As I and others have argued, *** there is little basis for thinking that we are anywhere close
to a 1970s type inflation, with inflation consistently running below the Fed's 2.0 percent target,
(which many of us think is too low anyhow). I'd love to see Krugman pushing the cause of full
employment here. We should call out racism and sexism where we see it, but this is a case where
there is a concrete policy that can do something to address it. Come on Paul, we need your voice.
PK: Consider what has happened to department stores. Even as Mr. Trump was boasting about saving
a few hundred jobs in manufacturing here and there, Macy's announced plans to close 68 stores
and lay off 10,000 workers. Sears, another iconic institution, has expressed "substantial doubt"
about its ability to stay in business.
Overall, department stores employ a third fewer people now than they did in 2001. That's half
a million traditional jobs gone - about eighteen times as many jobs as were lost in coal mining
over the same period.
And retailing isn't the only service industry that has been hit hard by changing technology.
Another prime example is newspaper publishing, where employment has declined by 270,000, almost
two-thirds of the work force, since 2000. ...
(To those that had them, they were probably
pretty decent jobs, albeit much less 'gritty'
than mining or manufacturing.)
There is a lot of elitism to go around. People will be much more reluctant to express publicly
the same views they hold in private (or pseudonymously on the internet?). But looking down on other people
and their work is pretty widespread (and in either case there is a lot of assumption about the
nature of the work and the personal attributes of the people doing it, usually of a derogatory
type in both cases).
I find it plausible that Krugman was referring to those widespread stereotypes about job categories
that (traditionally?) have not required a college degree, or have been relatively at the low end
of the esteem scale in a given industry (e.g. in "tech" and manufacturing, QA/testing related
work).
It must be possible to comment on such stereotypes, but there is of course always the risk
of being thought to hold them oneself, or indeed being complicit in perpetuating them.
As a thought experiment, I suggest reviewing what you yourself think about occupations not
held by yourself, good friends, or family members and acquaintances you like/respect (these
qualifications are deliberate). For example, you seem to think not very highly of maids.
Of course, being an RN requires significantly more training than being a maid, and not just
once when you start in your career. But at some level of abstraction, anybody who does work where
their autonomy is quite limited (i.e. they are not setting objectives at any level of the organization)
is "just a worker". That's the very stereotype we are discussing, isn't it?
Krugman thinks nurses are the equivalent of maids...
[ The problem is that Paul Krugman dismissed the work of nurses and maids and gardeners as
"menial." I find no evidence that Krugman understands that even after conditionally apologizing
to nurses. ]
"... It isn't. It's the world's biggest, most advanced cloud-computing company with an online retail storefront stuck between you and it. In 2005-2006 it was already selling supercomputing capability for cents on the dollar - way ahead of Google and Microsoft and IBM. ..."
"... Do you really think the internet created Amazon, Snapchat, Facebook, etc? No, the internet was just a tool to be used. The people who created those businesses would have used any tool they had access to at the time because their original goal was not automation or innovation, it was only to get rich. ..."
"... "Disruptive parasitic intermediation" is superb, thanks. The entire phrase should appear automatically whenever "disruption"/"disruptive" or "innovation"/"innovative" is used in a laudatory sense. ..."
"... >that people have a much bigger aversion to loss than gain. ..."
"... As the rich became uber rich, they hid the money in tax havens. As for globalization, this has less to do these days with technological innovation and more to do with economic exploitation. ..."
+100 to your comment. There is a decided attempt by the plutocrats to get us to focus our anger
on automation and not the people, like they themselves, who control the automation ..
Plutocrats control much automation, but so do thousands of wannabe plutocrats whose expertise
lets them come from nowhere to billionairehood in a few short years by using it to create some
novel, disruptive parasitic intermediation that makes their fortune. The "sharing economy" relies
on automation. As does Amazon, Snapchat, Facebook, Dropbox, Pinterest,
It's not a stretch to say that automation creates new plutocrats . So blame the individuals,
or blame the phenomenon, or both, whatever works for you.
So John D. Rockefeller and Andrew Carnegie weren't plutocrats–or were somehow better plutocrats?
Blame not individuals or phenomena but society and the public and elites who shape it. Our
social structure is also a kind of machine and perhaps the most imperfectly designed of all of
them. My own view is that the people who fear machines are the people who don't like or understand
machines. Tools, and the use of them, are an essential part of being human.
I'm replying to your upthread comment which seems to say today's careless campers and the technology
they rely on are somehow different from those other figures we know so well from history. In fact
all technology is tremendously disruptive but somehow things have a way of sorting themselves
out. So–just to repeat–the thing is not to "blame" the individuals or the automation but to get
to work on the sorting. People like Jeff Bezos with his very flaky business model could be little
more than a blip.
Automation? Those companies? I guess Amazon automates ordering; not exactly R. Daneel
Olivaw, for sure. If some poor Asian girl doesn't make the boots or some agri giant doesn't make
the flour, Amazon isn't sending you nothin', and the other companies are even more useless.
'Automation? Those companies? I guess Amazon automates ordering not exactly R. Daneel Olivaw
for sure.'
Um. Amazon is highly deceptive, in that most people think it's a giant online retail store.
It isn't. It's the world's biggest, most advanced cloud-computing company with an online retail
storefront stuck between you and it. In 2005-2006 it was already selling supercomputing capability
for cents on the dollar - way ahead of Google and Microsoft and IBM.
Do you really think the internet created Amazon, Snapchat, Facebook, etc? No, the internet
was just a tool to be used. The people who created those businesses would have used any tool they
had access to at the time because their original goal was not automation or innovation, it was
only to get rich.
Let me remind you of Thomas Edison. If he had lived 100 years later, he would have used
computers instead of electricity to make his fortune. (In contrast, Nikola Tesla and George Westinghouse
used electricity to be innovative, NOT to get rich.) It isn't the tool that is used, it is the
mindset of the people who use the tool.
"Disruptive parasitic intermediation" is superb, thanks. The entire phrase should appear
automatically whenever "disruption"/"disruptive" or "innovation"/"innovative" is used in a laudatory
sense.
100% agreement with your first point in this thread, too. That short comment should stand as a
sort of epigraph/reference for all future discussion of these things.
No disagreement on the point about actual and wannabe plutocrats either, but perhaps it's worth
emphasising that it's not just a matter of a few successful (and many failed) personal get-rich-quick
schemes, real as those are: the potential of 'universal machines' tends to be released in the
form of parasitic intermediation because, for the time being at least, it's released into a world
subject to the 'demands' of capital, and at a (decades-long) moment of crisis for the traditional
model of capital accumulation. 'Universal' potential is set free to seek rents and maybe to
do a bit of police work on the side, if the two can even be separated.
The writer of this article from 2010 [
http://www.metamute.org/editorial/articles/artificial-scarcity-world-overproduction-escape-isnt
] surely wouldn't want it to be taken as conclusive, but it's a good example of one marginal
train of serious thought about all of the above. See also 'On Africa and Self-Reproducing Automata'
written by George Caffentzis 20 years or so earlier [https://libcom.org/library/george-caffentzis-letters-blood-fire];
apologies for link to entire (free, downloadable) book, but my crumbling print copy of the single
essay stubbornly resists uploading.
Unfortunately, the healthcare insurance debate has been simply a battle between competing ideologies.
I don't think Americans understand the key role that universal healthcare coverage plays in creating
resilient economies.
Before penicillin, heart surgery, cancer treatments, modern obstetrics, etc., it didn't matter
whether you were rich or poor if you got sick. There was a good chance you would die in either case,
which was a key reason that the average life span was short.
In the mid-20th century that began to change so now lifespan is as much about income as anything
else. It is well known that people have a much bigger aversion to loss than gain. So if you currently
have healthcare insurance through a job, then you don't want to lose it by taking a risk to do
something where you are no longer covered.
People are moving less to find work – why would you uproot your family to work for a company
that is just as likely to lay you off in two years, in a place where you have no roots? People are less
likely today to quit jobs to start a new business – that is a big gamble now because you not
only have to keep the roof over your head and put food on the table, but you also have to cover
the even bigger cost of healthcare insurance in the individual market, or you face a much greater
risk of not making it to your 65th birthday.
In countries like Canada, healthcare coverage is barely a discussion point if somebody is looking
to move, change jobs, or start a small business.
If I had a choice today between universal basic income and universal healthcare coverage, I
would choose the healthcare coverage from a societal standpoint. That is simply insuring a risk
and can allow people much greater freedom during their working lives. Social Security
is of similar importance because it provides basic protection against disability and against starving
in the cold in your old age. These are vastly different incentive systems than paying people money
to live on even if they are not working.
Our ideological debates should be factoring these types of ideas in the discussion instead
of just being a food fight.
>that people have a much bigger aversion to loss than gain.
Yeah well if the downside is that you're dead this starts to make sense.
>instead of just being a food fight.
The thing is that the Powers-That-Be want it to be a food fight, as that is a great stalling
tactic at worst and a complete diversion at best. Good post, btw.
As the rich became uber rich, they hid the money in tax havens. As for globalization, this
has less to do these days with technological innovation and more to do with economic exploitation.
I will note that Germany, Japan, South Korea, and a few other nations have not bought into
this madness and have retained a good chunk of their manufacturing sectors.
'As for globalization, this has less to do these days with technological innovation and more
to do with economic exploitation.'
Economic exploiters are always with us. You're underrating the role of a specific technological
innovation. Globalization as we now know it really became feasible in the late 1980s with the
spread of instant global electronic networks, mostly via the fiberoptic cables through which everything
- telephony, Internet, etc - travels in Internet packet mode.
That's the point at which capital could really start moving instantly around the world, and
companies could really begin to run global supply chains and workforces. That's the point when
shifts of workers in facilities in Bangalore or Beijing could start their workdays as shifts of
workers in the U.S. were ending theirs, and companies could outsource and offshore their whole
operations.
Anything that the IMF claims should be taken with a grain of salt. The IMF is a quintessential
neoliberal institution that will support neoliberalism to
the bitter end.
Drivers of Declining Labor Share of Income
By Mai Chi Dao, Mitali Das, Zsoka Koczan, and Weicheng Lian
Technology: a key driver in advanced economies
In advanced economies, about half of the decline in labor
shares can be traced to the impact of technology. The decline
was driven by a combination of rapid progress in information
and telecommunication technology, and a high share of
occupations that could easily be automated.
Global integration - as captured by trends in final goods
trade, participation in global value chains, and foreign
direct investment - also played a role. Its contribution is
estimated at about half that of technology. Because
participation in global value chains typically implies
offshoring of labor-intensive tasks, the effect of
integration is to lower labor shares in tradable sectors.
Admittedly, it is difficult to cleanly separate the impact
of technology from global integration, or from policies and
reforms. Yet the results for advanced economies are
compelling. Taken together, technology and global integration
explain close to 75 percent of the decline in labor shares in
Germany and Italy, and close to 50 percent in the United
States.
Brad said: Few things can turn a perceived threat into a graspable opportunity like a high-pressure
economy with a tight job market and rising wages. Few things can turn a real opportunity into
a phantom threat like a low-pressure economy, where jobs are scarce and wages stagnant because
of the failure of macroeconomic policy.
What is it that prevents a statement like this from succeeding at the level of policy?
"... Of course after legacy systems [people] were retrenched or shown the door in making government more efficient MBA style, some
did hit the jack pot as consultants and made more that on the public dime . but the Gov balance sheet got a nice one time blip. ..."
"... In the government, projects "helped" by Siemens, especially at the Home and Passport Offices, cost billions and were abandoned.
At my former employer, an eagle's nest, it was Deloittes. At my current employer, which has lost its passion to perform, it's KPMG and
EY helping. ..."
"... My personal favourite is Accenture / British Gas . But then you've also got the masterclass in cockups Raytheon / U.K. Border
Agency . Or for sheer breadth of failure, there's the IT Programme That Helped Kill a Whole Bank Stone Dead ( Infosys / Co-op ). ..."
"... I am an assembler expert. I have never seen a job advertised, but a I did not look very hard. Send me your work!!! IBM mainframe
assembler ..."
"... What about Computer Associates? For quite a while they proudly maintained the worst reputation amongst all of those consultancy/outsourcing
firms. ..."
"... My old boss used to say – a good programmer can learn a new language and be productive in it in in space of weeks (and this
was at the time when Object Oriented was the new huge paradigm change). A bad programmer will write bad code in any language. ..."
"... The huge shortcoming of COBOL is that there are no equivalent of editing programs. ..."
"... Original programmers rarely wrote handbooks ..."
"... That is not to say that it is impossible to move off legacy platforms ..."
After nearly two years of our writing about the ticking time bomb of bank legacy systems written in COBOL, which depend on a shrinking
pool of aging programmers to baby them, Reuters reports on the issue. Chuck L flagged a Reuters story, Banks
scramble to fix old systems as IT 'cowboys' ride into sunset, which made some of the points we've been making but frustratingly missed
other key elements.
Here's what Reuters confirmed:
Banks and the Federal government are running mission-critical core systems on COBOL, and only a small number of older software
engineers have the expertise to keep the systems running . From the article:
In the United States, the financial sector, major corporations and parts of the federal government still largely rely on it
because it underpins powerful systems that were built in the 70s or 80s and never fully replaced
Experienced COBOL programmers can earn more than $100 an hour when they get called in to patch up glitches, rewrite coding
manuals or make new systems work with old.
For their customers such expenses pale in comparison with what it would cost to replace the old systems altogether, not to
mention the risks involved.
Here's what Reuters missed:
Why young coders are not learning COBOL. Why, in an era when IT grads find it hard to get entry-level jobs in the US, are young
programmers not learning COBOL as a guaranteed meal ticket? Basically, it's completely uncool and extremely tedious to work with
by modern standards. Given how narrow-minded employers are, if you get good at COBOL, I would bet it's assumed you are only capable
of doing grunt coding and would never get into the circles to work on the fantasy of getting rich by developing a hip app.
I'm sure expert readers will flag other issues, but the huge shortcoming of COBOL is that there is no equivalent of editing programs.
Every line of code in a routine must be inspected and changed line by line.
How banks got in this mess in the first place. The original sin of software development is failure to document the code. In fairness,
the Reuters story does allude to the issue:
But COBOL veterans say it takes more than just knowing the language itself. COBOL-based systems vary widely and original programmers
rarely wrote handbooks, making trouble-shooting difficult for others.
What this does not make quite clear is that given the lack of documentation, it will always be cheaper and lower risk to have
someone who is familiar with the code baby it, best of all the guy who originally wrote it. And that means any time you bring someone
in, they are going to have to sort out not just the code that might be causing fits and starts, but the considerable interdependencies
that have developed over time. As the article notes:
"It is immensely complex," said [former chief executive of Barclays PLC Anthony] Jenkins, who now heads startup 10x Future
Technologies, which sells new IT infrastructure to banks. "Legacy systems from different generations are layered and often heavily
intertwined."
I had the derivatives trading firm O'Connor & Associates as a client in the early 1990s. It was widely recognized as being one
of the two best IT shops in all of Wall Street at the time. O'Connor was running the biggest private sector Unix network in the world
back then. And IT was seen as critical to the firm's success; half of O'Connor's expenses went to it.
Even with it being a huge expense, and my client, the CIO, repeatedly telling his partners that documenting the code would
save 20% over the life of the software, his pleas fell on deaf ears. Even with the big commitment to building software, the trading
desk heads felt it was already taking too long to get their apps into production. Speed of deployment was more important to them
than cost or long-term considerations. 1 And if you saw this sort of behavior at a firm where software development was
a huge expense for partners who were spending their own money, it's not hard to see how managers in a firm where the developers were
much less important and management was fixated on short-term earnings targets would blow off tradeoffs like this entirely.
Picking up sales patter from vendors, Reuters is over-stating banks' ability to address this issue . Here is what Reuters would
have you believe:
The industry appears to be reaching an inflection point, though. In the United States, banks are slowly shifting toward newer
languages taking cue from overseas rivals who have already made the switch-over.
Commonwealth Bank of Australia, for instance, replaced its core banking platform in 2012 with the help of Accenture and software
company SAP SE. The job ultimately took five years and cost more than 1 billion Australian dollars ($749.9 million).
Accenture is also working with software vendor Temenos Group AG to help Swedish bank Nordea make a similar transition by 2020.
IBM is also setting itself up to profit from the changes, despite its defense of COBOL's relevance. It recently acquired EzSource,
a company that helps programmers figure out how old COBOL programs work.
The conundrum is that the more new routines banks pile on top of legacy systems, the more difficult a transition becomes. So delay
only makes matters worse. Yet the incentive of everyone outside the IT area is to hope they can ride it out and make the legacy
system time bomb their successor's problem.
If you read carefully, Commonwealth is the only success story so far. And it's vastly less complex than that of many US players.
First, it has roughly A$990 billion or $740 billion in assets now. While that makes it #46 in the world (and Nordea is of similar
size at #44 as of June 30, 2016), JP Morgan and Bank of America are three times larger. Second, and perhaps more important, they
are the product of more bank mergers. Commonwealth has acquired only four banks since the computer era. Third, many of the larger
banks are major capital markets players, meaning their transaction volume relative to their asset base and product complexity are also
vastly greater than for a Commonwealth. Finally, it is not impossible that, as a government-owned bank prior to 1990 and hence not
profit driven, Commonwealth had software jockeys who documented some of the COBOL, making a transition less fraught.
Add to that that the Commonwealth project was clearly a "big IT project". Anything over $500 million comfortably falls into that
category. The failure rate on big IT projects is over 50%; some experts estimate it at 80% (costly failures are disguised as well
as possible; some big IT projects going off the rails are terminated early).
Mind you, that is not to say that it is impossible to move off legacy platforms. The issue is the time and cost (as well as risk).
One reader, I believe Brooklyn Bridge, recounted a prototypical conversation with management in which it became clear that the cost
of a migration would be three times a behemoth bank's total profit for three years. That immediately shut down the manager's interest.
Estimates like that don't factor in the high odds of overruns. And even if it is too high for some banks by a factor of five,
that's still too big for most to stomach until they are forced to. So the question then becomes: can they whack off enough increments
of the problem to make it digestible from a cost and risk perspective? But the flip side is that the easier parts to isolate and
migrate are likely not to be the most urgent to address.
____ 1 The CIO had been the head index trader and had also helped build O'Connor's FX derivatives trading business, so he was
well aware of the tradeoff between trading a new instrument sooner versus software life cycle costs. He was convinced his partners
were being short-sighted even over the near term and had some analyses to bolster that view. So this was not empire-building
or special pleading. This was an effort at prudent management.
Accenture is also working with software vendor Temenos Group AG to help
and I promptly spurted my coffee over my desk. "Help" is the last thing either of these two ne'er-do-wells will be doing.
Apart from the problems ably explained in the above piece, I'm tempted to think industry PR and management gullibility to it
are the two biggest risks.
Heaps of IT upgrades have gone a bit wonky over here of late – health care payroll, ATO, Centrelink, Census – all assisted by
private software vendors and consultants after – drum roll – PR management did an "efficiency" drive [by].
Of course after legacy systems [people] were retrenched or shown the door in making government more efficient MBA style,
some did hit the jackpot as consultants and made more than before on the public dime. But the Gov balance sheet got a nice one time
blip.
disheveled . nice self licking icecream cone thingy and its still all gov fault . two'fer
It's the same in the UK as Clive knows and can add.
In the government, projects "helped" by Siemens, especially at the Home and Passport Offices, cost billions and were abandoned.
At my former employer, an eagle's nest, it was Deloittes. At my current employer, which has lost its passion to perform, it's
KPMG and EY helping.
What I have read / heard is that the external consultants often cost more and will take longer to do the work than internal
bidders. The banks and government(s) run an internal market and invite bids.
They keep writing books on how to avoid this sort of thing. Strangely enough, none of them ever tell CEOs or CIOs to pay people
decent wages, not treat them like crap, and to train up new recruits now and again. They also fail to highlight that though you
might like to believe you can go into the streets in Mumbai, Manila or Shenzhen waving a dollar bill and have dozens of experienced,
skilled and loyal developers run to you like a cat smelling catnip, that may only be your wishful thinking.
Just wait 'til we get started trying to implement Brexit
Oh man, if you only had a look at the kind of graduates Infosys hires en masse and the state of graduate programmers coming
out of universities here in India, you'd be amazed how we still haven't had massive hacks. And now the government, so confident
in the Indian IT industry's ability to make big IT systems, is pushing for the universal ID system (Aadhaar) to be made mandatory
for even booking flight tickets!
So would you recommend graduates do learn COBOL to get good jobs there in the USA?
I'd pick something really obscure, like maybe MUMPS
- yes, incredibly niche but that's the point, you can corner a market. You might not get oodles of work but what you do get
you can charge the earth for. Getting real-world experience is tricky though.
Another alternative, a little more mainstream is assembler. But that is hideous. You deserve every penny if you can learn that
and be productive in it.
For a bit more on why Cobol is hard to use see Why We Hate Cobol.
To summarise, Cobol is barely removed from programming in assembler, i.e. at the lowest level of abstraction, with endless
details needing to be taken care of. It dates back to the punched card era.
It is particularly hard for IT grads who have learned to code in Java or C# or any modern language to come to grips with, due
to the lack of features that are usually taken for granted. Those who try to are probably on their own due to a shortage of teachers/courses.
It's a language that's best mastered on the job as a junior in a company that still uses it, so it's hard to get it on your CV
before landing such a job.
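For readers who have never seen the language, here is a minimal sketch of how much ceremony even a trivial COBOL routine carries (program and field names are invented for illustration, and free-format source is assumed):

       *> Minimal illustrative COBOL program; names are invented.
       *> Even a one-line calculation needs the division-and-section skeleton.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. CALC-INTEREST.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 WS-BALANCE   PIC 9(9)V99  VALUE 250000.00.  *> fixed-point, exact cents
       01 WS-RATE      PIC 9V9999   VALUE 0.0425.
       01 WS-INTEREST  PIC 9(9)V99  VALUE ZERO.
       PROCEDURE DIVISION.
       MAIN-PARA.
           COMPUTE WS-INTEREST ROUNDED = WS-BALANCE * WS-RATE
           DISPLAY "INTEREST DUE: " WS-INTEREST
           STOP RUN.

The equivalent in Java or Python is a line or two. The point is not that COBOL cannot do the job; it is that nothing is abstracted away, which is why it reads as tedious to anyone trained on modern languages.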
There are potentially two types of career opportunities for those who invest the time to get up-to-speed on Cobol. The first
is maintenance and minor extension of legacy Cobol applications. The second and potentially more lucrative one is developing an
ability to understand exactly what a Cobol program does in order to craft a suitable replacement in a modern enterprise grade
language.
Well, COBOL's shortcomings are part technical and part "religious". After almost fifty years in software, and with experience
in many of the "modern enterprise grade languages", I would argue that the technical and business merits are poorly understood.
There is an enormous pressure in the industry to be on the "latest and greatest" language/platform/framework, etc. And under such
pressure to sell novelty, the strengths of older technologies are generally overlooked.
@Yves, I would be glad to share my viewpoint (biases, warts and all) at your convenience. I live nearby.
"It is particularly hard for IT grads who have learned to code in Java or C# or any modern language to come to grips with"
which tells you something about the quality of IT education these days, where "mastering" a language is often more important
than actually understanding what goes on and how.
My old boss used to say – a good programmer can learn a new language and be productive in it in the space of weeks (and
this was at the time when Object Oriented was the new huge paradigm change). A bad programmer will write bad code in any language.
IMHO, your old boss is wrong about that. Precisely because OO languages are a huge paradigm change and require a programmer
to nearly abandon everything he/she knows about programming. Then get his brain around OOP patterns when designing a complex system.
Not so easy.
As proof, I put forth the 30% success rate for new large projects in the latter 90s done with OOP tech. Like they say, if it
was easy, everyone would be doing it.
More generally, on the subject of Cobol vs Java or C++/C#, in the heyday of OOPs rollout in the early 90s, corporate IT spent
record amounts on developing new systems. As news of the Y2K problem spread, they very badly wanted to replace old Cobol/mainframe
legacy systems. As things went along, many of those plans got rolled back due to perceived problems with viability, cost and trained
personnel.
Part of the reason was existing Cobol IT staff took a look at OOP, then at their huge pile of Cobol legacy code, and their brains
melted down. I was around lots of them and they had all the symptoms of Snow Crash [Neal Stephenson]. I hope they got better.
It never occurred to me that the OOP-lite character of the newer "hipster" languages (Golang / Go or even plain old javascript)
is a response to OOP run amok.
In the university course I took, we were taught Algol-60. Then it turned out that the univ. had no budget for Algol compiles
for us. So we wrote our programs in Algol-60 for 'publication' and grading, and rewrote them in FORTRAN IV to run in a cheap bulk
FORTRAN execution system for results. Splendid way to push home Turing's point that all computing is the same. So when the job
needed COBOL, "Sure, bring it on."
My old boss used to say – a good programmer can learn a new language and be productive in it in the space of weeks (and this
was at the time when Object Oriented was the new huge paradigm change). A bad programmer will write bad code in any language.
Yes. Learning a new programming language is fairly easy but understanding existing patchwork code can be very hard indeed.
It just gets harder if you want to make reliable changes.
HR thinking, however, demands "credentials" and languages get chosen as such based on their simple labels. They are searchable
on L**kedIn!
A related limitation is the corporate aversion to spending any time or money on employee learning of either language or code.
There may not be anyone out there with all the skills needed but that will not stop managers from trying to hire them or, better
still, just outsourcing the whole mess.
Your boss was correct in my opinion - but also atypical. Most firms look for multi-years of experience in a language. They'll
toss your resume if you don't show you've used it extensively.
Even if a new coder spent the time to learn COBOL, if he wasn't using it on the job or in pretty significant projects he would
not be considered. And there aren't exactly many open source projects out there written in COBOL to prove one's competence. The
limiting factor is not whether you "know" COBOL, or whether you know how to learn it. The limiting factor is the actual knowledge
of the system, how it was implemented, and all the little details that never get written down no matter how good your documentation.
If your system is 30+ years old it has complexity hidden in every nook and cranny.
As for the language itself, COBOL is an ancient language from a much older paradigm than what students learn in school today.
Most students skip right past C, they don't learn structural programming. They expect to have extensive libraries of pre-written
routines available for reuse. And they expect to work in a modern IDE (development environment), a software package that makes
it much easier to write and debug code. COBOL doesn't have tools of this level.
When I was in the Air Force I was trained as a programmer. COBOL was one of the languages they "taught". I never used it, ever,
and wouldn't dream of trying it today. It's simply too niche. I would never recommend anyone learn COBOL in the hopes of getting
a job. Get the job first, and if it happens to include some COBOL get the expertise that way.
Having seen the 'high level code' in C++, I'm not sure what makes it 'modern'. It's really an outgrowth of C, which is basically
the assembler language of Unix, which itself is no spring chicken. Mostly what is called 'modern' is just the latest fad with
the highest push from vendors. And sadly what we see in IT is that the IT trade magazines are more into what they sell than
what companies need (maybe because of advertising?).
As to why schools tend to teach these languages rather than others? Mainly because it's hip. It's also cheaper for the schools, as they
don't have much in the way of infrastructure to teach them (kids bring their own computers). Of course teachers are as likely to
be influenced by the latest 'shiny' thing as anyone else.
C++ shares most of the core C spec but that's it [variables and scope, datatypes, functions sorta, math and logic operators,
logic control statements]. The reason you can read high-level C++ is because it uses objects that hide the internal code and are
given names that describe their use, which, if done right, makes the code somewhat readable and, along with a short comment header,
self-documenting.
Then at a high level most code is procedural and/or event driven, which makes it appear to function like C or any other procedural
language, without the Goto statements and subroutines, because that functionality is now encapsulated within the C++ objects
(which are a datatype that combines data structures and related functions that act on this data).
Well put. I was going to make this point. Note that today's IT grads struggle with Cobol for the same reason that modern
airline pilots would struggle to build their own airplane. The industry has evolved and become much more specialized, and standard
'solved' problems have migrated into the core toolsets and become invisible to developers, who now work at a much higher level
of abstraction. So for example a programmer who learned using BASIC on a Commodore 64 probably knows all about graphics coding
by direct addressing of screen memory, which modern programmers would consider unnecessary at best and dangerous at worst. Not
to mention it's exhausting drudgery compared to working with modern toolsets.
The other reason more grads don't learn COBOL is because it's a sunset technology. This is true even if systems written in
COBOL are mission critical and not being replaced. As more and more COBOL programmers retire or die, banks will eventually reach
the point where they don't have enough skilled staff available to keep their existing systems running. If they are in a position
where they have to fix things anyway, for example due to a critical failure, they will be forced to resort to cross-training other
developers, at great expense and pain for all concerned, and with no guarantee of success. One or two of these experiences will
be enough to convince them that migration is necessary, whatever the cost (if their business survives them, which isn't a given
when it comes to critical failures involving out of date and poorly-understood technology). And while developers with COBOL skills
will be able to name their own price during those events, it's not likely to be a sustainable working environment in the longer
term.
It would take a significant critical mass of younger programmers deciding to learn COBOL to change this dynamic. One person
on their own isn't going to make any difference, and it's not career advice I would ever give to a young graduate looking to enter
IT.
I am an experienced developer who has worked with a lot of different languages, including some quite low level ones in my early
days. I don't know COBOL, but I am confident that I could learn it well enough to perform code archaeology on it given enough
time (although probably nowhere near as efficiently as someone who built a career on it). Whether I could be convinced to do so
is another question. If you paid me never-need-to-work-again money, then maybe. But nobody is ever going to do that unless it's
a crisis, and I'm not likely to sign up for a death march situation with my current family commitments.
"Experienced COBOL programmers can earn more than $100 an hour"
Then the people hiring are getting them dirt cheap. This is a lot closer to consulting than contracting–a very specialized
skill set and only a small set of people available. The rate should be $200-300/hour.
I wonder if it has something to do with the IRS rules that made that guy fly a plane into an IRS office? Because of the rules,
programmers aren't allowed to work as independent consultants. Since their employer/middleman takes a huge cut the pay they receive
is a lot lower. Coders with a security clearance make quite a bit but that requires an "in", getting the clearance in the first
place which most employers won't pay for.
You're right. I've seen it on clunky databases in a clothing firm in NY State, a seed and grain distribution facility in Minnesota
and a bank in Minneapolis. They're horrible and Yves is right – documentation is completely ABSENT
No different than the failure of the public sector to maintain dams, bridges and highways. Basic civil engineering but our
business model never included maintenance nor replacement costs. That is because our business model is accounting fraud.
I grew up on Fortran, and Cobol isn't too different, just limited to two digits to the right of the decimal point. I feel so sorry
for these code jockeys who can't handle a bit of drudgery, who can't do squat without a gigabyte routine library to invoke. Those
languages were used as scripting languages or report writers back in the old days.
Please hire another million Indian programmers; they don't mind being poorly paid or the drudgery. Americans and Europeans are
so over-rated. Business always complains they can't hire the right people: some job requires 2 PhDs and we can't pay more than
$30k, am I right? Business needs slaves, not employees.
This was a "new payroll" system for school teachers in NZ. It was an ongoing disaster. If something as simple (?) as paying
NZ teachers could turn into such a train-wreck, imagine what updating the software of the crooked banks could entail. I bet that
there are secret frauds hidden in the ancient software, like the rat mummies and cat skeletons that one finds when lifting the
floor of old houses.
"Novopay is a web-based payroll system for state and state integrated schools in New Zealand, processing the pay of 110,000
teaching and support staff at 2,457 schools .. From the outset, the system led to widespread problems with over 8,000 teachers
receiving the wrong pay and in some cases no pay at all; within a few months, 90% of schools were affected .."
"Many of the errors were described as 'bizarre'. One teacher was paid for 39 days, instead of 39 hours getting thousands of
dollars more than he should have. Another teacher was overpaid by $39,000. She returned the money immediately, but two months
later, had not been paid since. A relief teacher was paid for working at two different schools on the same day – one in Upper
Hutt and the other in Auckland. Ashburton College principal, Grant McMillan, said the 'most ludicrous' problem was when "Novopay
took $40,000 directly out of the school bank account to pay a number of teachers who had never worked at the college".
"but the huge shortcoming of COBOL is that there are no equivalent of editing programs. Every line of code in a routine must
be inspected and changed line by line"
I'm not sure what you mean by this.
If you mean that COBOL doesn't have the new flash IDEs that can do smart things with "syntactic sugar", then it really depends
on the demand. Smart IDEs can be written for pretty much any language (smart IDEs work by operating on ASTs, which are part and
parcel of any compiler; the problem is more what to do if you have externalised functions etc., which is for example why
it took so long for those smart IDEs to work with C++ and its linking model). The question is whether it pays – and a lot of old
COBOL hands eschew anything except for vi (or equivalent) because coding should be done by REAL MEN.
On the general IT problem. There are three problems, which are sort of related but not.
The first problem is the interconnectedness of the systems. Especially for a large bank, it's not often clear where one system
ends and the other begins, what are the side-effects of running something (or not running), who exactly produces what outputs
and when etc. The complexity is more often at this level than cobol (or any other) line-by-line code.
The second problem is the IT personnel you get. If you're unlucky, you get coding monkeys, who barely understand _any_ programming
language (there was a time I didn't think people like that got hired; I now know better), and have no idea what analytical and algorithmic
thinking is. If you're lucky, you get a bunch of IT geeks, who can discuss the latest technology till cows come home, know the
intricate details of what a sequence point in C++ is and how it affects execution, but don't really care that much about the business.
Then you get some possibly even brilliant code, but often also get unnecessary technological artifacts and new technologies just
because they are fun – even though a much simpler solution would work just as well if not better. TBH, you can get this from the
other side too, someone who understands the business but doesn't know even basic language techniques, which generally means their
code works very well for the business, but is a nightmare to maintain (a typical population of this groups are front office quants).
If you are incredibly lucky, you get someone who understands the business and happens to know how to code well too. Unfortunately,
this is almost a mythical beast, especially since neither IT nor the business encourage people to understand each other.
Which is what gets me to the third point – the politics of it. And that, TBH, is why most projects fail. Because it's easier
to staff a project with 100 developers and then say all that could have been done was done, than get 10 smart people working on
it, but risk that if it fails you get told you haven't spent enough resources. "We are not spending enough money" is paradoxically
one of the "problems" I often see here, when the problem really is "we're not spending money smartly enough". Because in an organization
budget=power. I have yet to see an IT project that would have 100+ developers that would _really_ succeed (as opposed to succeed
by redefining what it was to deliver to what was actually delivered).
Oh, and a last point, on documentation. TBH, documentation of the code is superfluous if a) it's clear what business problem
is being solved, b) it has a good set of test cases, and c) the code is reasonably cleanly written (which tends to be the real problem).
Documenting code by anything other than example is in my experience just a costly exercise. Mind you, this is entirely different
from documenting how systems hang together and how their interfaces work.
On the last point, I have to tell you I in short succession happened to work not just with O'Connor, but about a year later,
with Bankers Trust, then regarded as the other top IT shop on Wall Street. Both CIOs would disagree with you vehemently on your
claim re documentation.
Yes, in 90s there was a great deal of emphasis on code documentation. The problem with that is that the requirements in real
world change really quick. Development techniques that worked for sending the man to the moon don't really work well on short-cycle
user driven developments.
The 90s were mostly the good old waterfall method (which was really based on the NASA techniques), but even as early as the 2000s it
started to change a lot. Part of it came from the realization that the "building" metaphor that was the working approach for a
lot of that didn't really work for code.
When you're building a bridge, it's expensive, so you have to spend a lot of time with blueprints etc. When you're doing code,
documenting it in "normal" human world just adds a superfluous step. It's much more efficient to make sure your code is clean
and readable than writing extra documents that tell you what the code does _and_ have to be kept in sync all the time.
Moreover, bits like pretty pictures showing the code interaction, dependencies and sometimes even more can now be generated
automatically from the code, so again, it's more efficient to do that than to keep two different versions of what should be the
same truth.
With all due respect, O'Connor and Bankers Trust were recognized at top IT shops then PRECISELY because they were the best,
bar none, at "short cycle user driven developments." They were both cutting edge in derivatives because you had to knock out the
coding to put new complex derivatives into production.
Don't insinuate my clients didn't know what they were talking about. They were running more difficult coding environments than
you've ever dealt with even now. The pace of derivative innovation was torrid then and there hasn't been anything like it since
in finance. Ten O'Connor partners made $1 billion on the sale of their firm, and it was entirely based on the IT capabilities.
That was an unheard of number back then, 1993, particularly given the scale of the firm (one office in Chicago, about 250 employees).
I can't talk about how good/bad your clients were except for generic statements – and the above were generic statements that
in the 90s MOST companies used waterfall.
At the same time please do not talk about what programming environments I was in, because you don't know. That's assuming it's
even possible to compare coding environments – because quant libraries that first and foremost concentrate on processing data
(and I don't even know if that was the majority of your clients' code) are a very, very different beast from an extremely UI-complex
but computationally trivial project, or something that has both trivial UI and computation but is very database heavy, etc. etc.
I don't know what specific techniques your clients used. But the fact they WANTED to have more documentation doesn't mean that
having more documentation would ACTUALLY be useful.
With all due respect, I've spent the first half of the 00s talking to some of the top IT development methodologists of the time,
from the Gang Of Four people to Agile Manifesto chaps, and practicing/leading/implementing SW development methodology across a
number of different industries (anything from "pure" waterfall to variants of it to XP).
The general agreement across the industry was (and I believe still is) that documenting _THE CODE_ (outside of the code) was
waste of time (actually it was ranging from any design doc to various levels of design doc, depending on who you were talking
to).
Again, I put emphasis on the code – that is not the same as say having a good whitepaper telling you how the model you're implementing
works, or what the hell the users actually want – i.e. capturing the requirements.
As an aside – implementation of new derivative payoffs can actually be done in a fairly trivial way, depending on how exactly
you model them in the code. I wrote an extensive library that did it, whose whole purpose was to deal with new products and
allow them to be incubated quickly and effectively – and that most likely involved doing things that no-one at BT/O'Connor even
looked at in the early 1990s (because XVA wasn't even a gleam in anyone's eye at that time).
Well at my TBTF, where incomprehensible chaos rules, the only thing - and I do mean the only thing - that keeps major disasters
averted (perhaps "ameliorated" is putting it better) is where some of the key systems are documented. Most of the core back end
is copiously and reasonably well documented and as such can survive a lot of mistreatment at the hands of the current outsourcer
du jour.
But some "lower priority" applications are either poorly documented or not documented at all. And a "low priority" application
is only "low priority" until it happens to sit on the critical path. Even now I have half of Bangalore (it seems so, at any rate)
sitting there trying to reverse engineer some sparsely documented application - although I suspect there was documentation, it
just got "lost" in a succession of handovers - desperate in their attempts to figure out what the application does and how it
does it. You can hear the fear in their voices; it is scary stuff, given how a crappy-little-VB6-pile-of-rubbish is now the only
way to manage a key business process. Where there are no useable comments in the code and no other application documentation, you
are totally, totally screwed.
It seems like you guys are talking past each other to some degree. I get the sense that vlade is talking about commenting code,
and dismissing the idea of code comments that don't live with the code. Yves' former colleagues are probably referring to higher
level specifications that describe the functionality, requirements, inputs, and outputs of the various software modules in the
system.
If this is the case, then you're both right. Even comments in the code can tend to get out of date due to application of bug fixes,
and other reasons for 'drift' in the code, unless the comments are rigorously maintained along wth the code. Were the code-level
descriptions maintained somewhere else, that would be much more difficult and less useful. On the other hand the higher-level
specifications are pretty essential for using, testing, and maintaining the software, and would sure be useful for someone trying
to replace all or parts of the system.
In my experience you need a combination of both. There is simply no substitute for a brief line in some ghastly nested if/then
procedure that says "this section catches host offline exceptions if the transaction times out and calls the last incremental
earmarked funds as a fallback" or what-have-you.
That sort of thing can save weeks of analysis. It can stop an outage from escalating from a few minutes to hours or
even days.
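To make that concrete in the language this thread is about, here is the kind of one-line comment being described, dropped into an invented COBOL fragment (the field names and fallback logic are hypothetical, not from any real system):

       *> Host offline: the authorization call timed out, so fall back to the
       *> last earmarked incremental funds instead of failing the transaction.
           IF HOST-REPLY-CODE = "TIMEOUT"
               MOVE LAST-EARMARKED-FUNDS TO AVAILABLE-FUNDS
           ELSE
               MOVE HOST-REPORTED-FUNDS TO AVAILABLE-FUNDS
           END-IF.

Two comment lines kept next to the code they describe are exactly what the reverse-engineering teams described above never seem to inherit.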
There is some problem-solving/catastrophe-avoiding discussion about setting up a new bank with a clean, updated (i.e., this
millennium) IT approach and then merging the old bank into that and decommissioning that old one. Many questions arise about applicable
software both in-house and at all those vendor shops that would need some inter-connectivity.
Legacy systems lurk all over the economy, from banks to utilities to government and education. The O'Connor CIO advice relating
to life-cycle costing was probably unheard in many places besides
The Street.
Building them from scratch is usually the most likely to be a failure, as too many in both IT and business only know parts of
the needs. And if a company can't implement a vendor-supplied package to do the work, what makes us think they can do it from scratch?
I did learn COBOL when I was at the University more than three decades ago, and at that time it was already decidedly "uncool".
The course, given by an old-timer, was great though. I programmed in COBOL in the beginnings of my professional life (MIS applications,
not banking), so I can provide a slightly different take on some of those issues.
As far as the language itself is concerned, disregard those comments about it being like "assembly". COBOL already showed its
age in the 1980s, but though superannuated it is a high-level language geared at dealing with database records, money amounts
(calculations with controlled accuracy), and reports. For that kind of job, it was not that bad.
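As a hedged illustration of what "database records, money amounts and reports" means in COBOL terms (the record layout and field names below are invented), data is declared as fixed-point decimal fields and edited directly for print, with no binary floating point anywhere:

       *> Invented record layout: money is held as packed decimal with an
       *> exact number of cents, never as binary floating point.
       01 ACCOUNT-RECORD.
          05 ACCT-ID        PIC X(10).
          05 ACCT-BALANCE   PIC S9(11)V99 COMP-3.
       *> Invented report line: the edited PICTURE inserts the currency sign,
       *> commas and decimal point when a value is MOVEd into it.
       01 REPORT-LINE.
          05 RPT-ID         PIC X(10).
          05 FILLER         PIC X(2) VALUE SPACES.
          05 RPT-BALANCE    PIC $$$,$$$,$$$,$$9.99-.

A single MOVE ACCT-BALANCE TO RPT-BALANCE does the formatting; there is no format string, library call or rounding surprise, which goes some way to explaining why the language hung on so long in back offices.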
The huge shortcoming of COBOL is that there is no equivalent of editing programs.
While in the old times a simple text editor was the main tool for programming in that language, modern integrated, interactive
development environments for COBOL have been available for quite a while - just as there are for Java, C++ or C#.
And that is a bit of an issue. For, already in my times, a lot, possibly most COBOL was not programmed manually, but generated
automatically - typically from pseudo-COBOL annotations or functional extensions inside the code. Want to access a database (say
Oracle, DB2, Ingres) from COBOL, or generate a user interface (for 3270 or VT220 terminals in those days), or perform some networking?
There were extensions and code generators for that. Nowadays you will also find coding utilities to manipulate XML or interface
with routines in other programming languages. All introduce deviations and extensions from the COBOL norm.
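For readers who have not seen those extensions, a typical example is embedded SQL: the EXEC SQL block below is not COBOL at all but input to a precompiler (DB2, Pro*COBOL and the like) that expands it into generated COBOL plus runtime calls. The table and host-variable names here are invented for illustration.

       *> Hypothetical embedded-SQL fragment; a precompiler rewrites this into
       *> plain COBOL before the COBOL compiler ever sees it.
           EXEC SQL
               SELECT ACCT_BALANCE
                 INTO :WS-BALANCE
                 FROM ACCOUNTS
                WHERE ACCT_ID = :WS-ACCT-ID
           END-EXEC.

Keeping such systems alive therefore means knowing the precompilers, transaction monitors and code generators wrapped around the COBOL at least as much as the COBOL itself.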
If, tomorrow, I wanted to apply for a job at one of those financial institutions battling with legacy software, my rusty COBOL
programming skills would not be the main problem, but my lack of knowledge of the entire development environment. That
would mean knowing those additional code generators, development environments, extra COBOL-geared database/UI/networking/reporting
modules. In an IBM mainframe environment, this would probably mean knowing things like REXX, IMS or DB2, CICS, etc (my background
is DEC VMS and related software, not IBM stuff).
So those firms are not holding dear onto just COBOL programmers - they are desperately hoarding people who know their way around
in mainframe programming environments for which training (in Universities) basically stopped in the early 1990s.
Furthermore, I suspect that some of those code generators/interfaces might themselves be decaying legacy systems whose original
developers went out of business or have been slowly withdrawing from their maintenance. Correcting or adjusting manually the COBOL
code generated by such tools in the absence of vendor support is lots of fun (I had to do something like that once, but it actually
went smoothly).
Original programmers rarely wrote handbooks
My experience is that proper documentation has a good chance to be rigorously enforced when the software being developed is
itself a commercial product to be delivered to outside parties. Then, handbooks, reference manuals and even code documentation
become actual deliverables that are part of the product sold, and whose production is planned and budgeted for in software development
programmes.
I presume it is difficult to ensure that effort and resources be devoted to document internal software because these are purely
cost centers - not profit centers (or at least, do not appear as such directly).
That is not to say that it is impossible to move off legacy platforms
So, we knew that banks were too big to fail, too big to jail, and are still too big to bail. Are their software problems too
big to nail?
Actually I suspect banks, like the rest of business, don't really care about their systems till they are down, as they will find
the latest offshore company to do it cheaper.
Why then have I been told that reviewing code for Y2K had to be done line by line?
I said documentation, not handbooks. And you are assuming banks hired third parties to do their development. Buying software
packages and customizing them, as well as greater use of third party vendors, became a common practice only as of the 1990s.
I know it will screw me and people I care about, and "throw the world economy into chaos," but who effing cares (hint: not
me) if the code pile reaches past the limits of its angle of repose, and slumps into some chaotic non-form?
Maybe a sentiment that gets me some abuse, but hey, is it not the gravamen of the story here that dysfunction and then collapse
are very possible, maybe even likely?
And where are the tools to re-build this Tower of Babel, symbol of arrogant pride? Maybe G_D has once again, per the Biblical
story, confounded the tongues of men (and women) to collapse their edifices and reduce them to working the dirt (what's left of
it after centuries of agricultural looting and the current motions toward temperature-driven uninhabitability.)
My first job out of uni, I was trained as an MVS/COBOL programmer. After successfully completing the 11-week pass/fire course,
I showed up to my 1st work assignment where my boss said to me, "Here's your UNIX terminal."
;-) – COBOL didn't strike me as difficult, just arcane and verbose. Converting to SAP is a costly nightmare. That caused
me to leave a job once; I had no desire to deal with SAP/ABAP. I'm surprised no one has come up with an acceptable next-gen thing.
I remember years ago seeing an ad for Object-Oriented-COBOL in
an IT magazine and I almost pissed myself laughing. On the serious side, if it's still that powerful and well represented in banking,
perhaps someone should look into an upgraded version of the language/concepts and build something easy to lift and shift to – a COBOL++?
This sounds like an opportunity for a worker's coop, to train their workers in COBOL and to get back at these banks by REALLY
exploiting them good and hard.
So is this why no one is willing to advocate regulating derivatives in an accountable way? I almost can't believe this stuff.
I can't believe that we are functioning at all, financially. 80% of IT projects fail? And if legacy platforms are replaced at
great time and expense, years and trillions, what guarantee is there that the new platform will not spin out just as incomprehensibly
as the COBOL-based software evolved, with simplistic patches of other software lost in translation? And maybe many times faster. Did
Tuttle do this? I think we need new sophisticated hardware, something even Tuttle can't mess with.
I think it is only 80% of 'large' IT projects that fail. I think it says more about the lack of scalability of large software projects,
or our (in-)ability to deal with exponential complexity growth.
Looks like there are more than a few current NYC jobs at Accenture, Morgan Stanley, JPMorgan Chase, and Bank of America for
programmers who code in COBOL.
Probably automated 200. In every case, displacing 3/4 of the
workers and increasing production 40% while greatly improving
quality. The exact same can be said for larger-scale operations such as
automobile mfg, ...
The convergence of offshoring and
automation in such a short time frame meant that instead of a
gradual transformation that might have allowed for more
evolutionary economic thinking, American workers got
gobsmacked. The aftermath includes the wage disparity, opiate
epidemic, Trump, ...
This transition is of the scale of the industrial
revolution with climate change thrown in. This is just the
beginning of great social and economic turmoil. None of the
stuff that evolved specific to the industrial revolution
applies.
No it was policy driven by politics. They increased profits
at the expense of workers and the middle class. The New
Democrats played along with Wall Street.
What do you make of the DeLong link? Why do you avoid discussing it?
"...
The lesson from history is not that the robots should be stopped; it is that we will need to
confront the social-engineering and political problem of maintaining a fair balance of relative
incomes across society. Toward that end, our task becomes threefold.
First, we need to make sure that governments carry out their proper macroeconomic role,
by maintaining a stable, low-unemployment economy so that markets can function properly. Second,
we need to redistribute wealth to maintain a proper distribution of income. Our market economy
should promote, rather than undermine, societal goals that correspond to our values and morals.
Finally, workers must be educated and trained to use increasingly high-tech tools (especially
in labor-intensive industries), so that they can make useful things for which there is still
demand.
Sounding the alarm about "artificial intelligence taking American jobs" does nothing to
bring such policies about. Mnuchin is right: the rise of the robots should not be on a treasury
secretary's radar."
Except that Germany and Japan have retained a larger share of workers in manufacturing, despite
more automation. Germany has also retained much more of its manufacturing base than the US
has. The evidence really does point to the role of outsourcing in the US compared with others.
I got an email of some tale that Adidas would start manufacturing in Germany as opposed to
China. Not with German workers but with robots. The author claimed the robots would cost only
$5.50 per hour as opposed to $11 an hour for the Chinese workers. Of course Chinese apparel
workers do not get anywhere close to $11 an hour and the author was not exactly a credible
source.
"The new "Speedfactory" in the southern town of Ansbach near its Bavarian headquarters will
start production in the first half of 2016 of a robot-made running shoe that combines a machine-knitted
upper and springy "Boost" sole made from a bubble-filled polyurethane foam developed by BASF."
Interesting. I thought that "keds" production was already fully automated. Bright colors
are probably the main attraction. But Adidas commands a premium price...
The machine-knitted upper is the key -- robots, even sophisticated ones, put additional demands
on the precision of the parts to be assembled. That's also probably why a monolithic molded sole
is chosen. Kind of like 3-D printing of shoes.
Robots do not "feel" the nuances of the technological process like humans do.
While I agree that Chinese workers don't get $11 - frequently employee costs are accounted
at a loaded rate (including all benefits - which in China would include the capital cost of dormitories,
food, security staff, benefits and taxes). I am guessing that a $2-3 an hour wage would result
in an $11 fully loaded rate under those circumstances. Those other costs are not required with
robots.
I agree with you. The center-left want to exculpate globalization and outsourcing, or free
them from blame, by providing another explanation: technology and robots. They're not just
arguing with Trump.
Brad Setser:
"I suspect the politics around trade would be a bit different in the U.S. if the goods-exporting
sector had grown in parallel with imports.
That is one key difference between the U.S. and Germany. Manufacturing jobs fell during
reunification - and Germany went through a difficult adjustment in the early 2000s. But over
the last ten years the number of jobs in Germany's export sector grew, keeping the number of
people employed in manufacturing roughly constant over the last ten years even with rising
productivity. Part of the "trade" adjustment was a shift from import-competing to exporting
sectors, not just a shift out of the goods producing tradables sector. Of course, not everyone
can run a German-sized surplus in manufactures - but it seems likely the low U.S. share of manufacturing
employment (relative to Germany and Japan) is in part a function of the size and persistence
of the U.S. trade deficit in manufactures. (It is also in part a function of the fact that
the U.S. no longer needs to trade manufactures for imported energy on any significant scale;
the U.S. has more jobs in oil and gas production, for example, than Germany or Japan)."
APR 3, 2017
Artificial Intelligence and Artificial Problems
by J. Bradford DeLong
BERKELEY – Former US Treasury Secretary Larry Summers
recently took exception to current US Treasury Secretary
Steve Mnuchin's views on "artificial intelligence" (AI) and
related topics. The difference between the two seems to be,
more than anything else, a matter of priorities and emphasis.
Mnuchin takes a narrow approach. He thinks that the
problem of particular technologies called "artificial
intelligence taking over American jobs" lies "far in the
future." And he seems to question the high stock-market
valuations for "unicorns" – companies valued at or above $1
billion that have no record of producing revenues that would
justify their supposed worth and no clear plan to do so.
Summers takes a broader view. He looks at the "impact of
technology on jobs" generally, and considers the stock-market
valuation for highly profitable technology companies such as
Google and Apple to be more than fair.
I think that Summers is right about the optics of
Mnuchin's statements. A US treasury secretary should not
answer questions narrowly, because people will extrapolate
broader conclusions even from limited answers. The impact of
information technology on employment is undoubtedly a major
issue, but it is also not in society's interest to discourage
investment in high-tech companies.
On the other hand, I sympathize with Mnuchin's effort to
warn non-experts against routinely investing in castles in
the sky. Although great technologies are worth the investment
from a societal point of view, it is not so easy for a
company to achieve sustained profitability. Presumably, a
treasury secretary already has enough on his plate to have to
worry about the rise of the machines.
In fact, it is profoundly unhelpful to stoke fears about
robots, and to frame the issue as "artificial intelligence
taking American jobs." There are far more constructive areas
for policymakers to direct their focus. If the government is
properly fulfilling its duty to prevent a demand-shortfall
depression, technological progress in a market economy need
not impoverish unskilled workers.
This is especially true when value is derived from the
work of human hands, or the work of things that human hands
have made, rather than from scarce natural resources, as in
the Middle Ages. Karl Marx was one of the smartest and most
dedicated theorists on this topic, and even he could not
consistently show that technological progress necessarily
impoverishes unskilled workers.
Technological innovations make whatever is produced
primarily by machines more useful, albeit with relatively
fewer contributions from unskilled labor. But that by itself
does not impoverish anyone. To do that, technological
advances also have to make whatever is produced primarily by
unskilled workers less useful. But this is rarely the case,
because there is nothing keeping the relatively cheap
machines used by unskilled workers in labor-intensive
occupations from becoming more powerful. With more advanced
tools, these workers can then produce more useful things.
Historically, there are relatively few cases in which
technological progress, occurring within the context of a
market economy, has directly impoverished unskilled workers.
In these instances, machines caused the value of a good that
was produced in a labor-intensive sector to fall sharply, by
increasing the production of that good so much as to satisfy
all potential consumers.
The canonical example of this phenomenon is textiles in
eighteenth- and nineteenth-century India and Britain. New
machines made the exact same products that handloom weavers
had been making, but they did so on a massive scale. Owing to
limited demand, consumers were no longer willing to pay for
what handloom weavers were producing. The value of wares
produced by this form of unskilled labor plummeted, but the
prices of commodities that unskilled laborers bought did not.
The lesson from history is not that the robots should be
stopped; it is that we will need to confront the
social-engineering and political problem of maintaining a
fair balance of relative incomes across society. Toward that
end, our task becomes threefold.
First, we need to make sure that governments carry out
their proper macroeconomic role, by maintaining a stable,
low-unemployment economy so that markets can function
properly. Second, we need to redistribute wealth to maintain
a proper distribution of income. Our market economy should
promote, rather than undermine, societal goals that
correspond to our values and morals. Finally, workers must be
educated and trained to use increasingly high-tech tools
(especially in labor-intensive industries), so that they can
make useful things for which there is still demand.
Sounding the alarm about "artificial intelligence taking
American jobs" does nothing to bring such policies about.
Mnuchin is right: the rise of the robots should not be on a
treasury secretary's radar.
The Global Rise of Corporate Saving
By Peter Chen, Loukas Karabarbounis, and Brent Neiman
Abstract
The sectoral composition of global saving changed
dramatically during the last three decades. Whereas in the
early 1980s most of global investment was funded by household
saving, nowadays nearly two-thirds of global investment is
funded by corporate saving. This shift in the sectoral
composition of saving was not accompanied by changes in the
sectoral composition of investment, implying an improvement
in the corporate net lending position. We characterize the
behavior of corporate saving using both national income
accounts and firm-level data and clarify its relationship
with the global decline in labor share, the accumulation of
corporate cash stocks, and the greater propensity for equity
buybacks. We develop a general equilibrium model with product
and capital market imperfections to explore quantitatively
the determination of the flow of funds across sectors.
Changes including declines in the real interest rate, the
price of investment, and corporate income taxes generate
increases in corporate profits and shifts in the supply of
sectoral saving that are of similar magnitude to those
observed in the data.
Are Profits Hurting Capitalism?
By YVES SMITH and ROB PARENTEAU
A STREAM of disheartening economic news last week,
including flagging consumer confidence and meager
private-sector job growth, is leading experts to worry that
the recession is coming back. At the same time, many
policymakers, particularly in Europe, are slashing government
budgets in an effort to lower debt levels and thereby restore
investor confidence, reduce interest rates and promote
growth.
There is an unrecognized problem with this approach:
Reductions in deficits have implications for the private
sector. Higher taxes draw cash from households and
businesses, while lower government expenditures withhold
money from the economy. Making matters worse, businesses are
already plowing fewer profits back into their own
enterprises.
Over the past decade and a half, corporations have been
saving more and investing less in their own businesses. A
2005 report from JPMorgan Research noted with concern that,
since 2002, American corporations on average ran a net
financial surplus of 1.7 percent of the gross domestic
product - a drastic change from the previous 40 years, when
they had maintained an average deficit of 1.2 percent of
G.D.P. More recent studies have indicated that companies in
Europe, Japan and China are also running unprecedented
surpluses.
The reason for all this saving in the United States is
that public companies have become obsessed with quarterly
earnings. To show short-term profits, they avoid investing in
future growth. To develop new products, buy new equipment or
expand geographically, an enterprise has to spend money - on
marketing research, product design, prototype development,
legal expenses associated with patents, lining up contractors
and so on.
Rather than incur such expenses, companies increasingly
prefer to pay their executives exorbitant bonuses, or issue
special dividends to shareholders, or engage in purely
financial speculation. But this means they also short-circuit
a major driver of economic growth.
Some may argue that businesses aren't investing in growth
because the prospects for success are so poor, but American
corporate profits are nearly all the way back to their peak,
right before the global financial crisis took hold.
Another problem for the economy is that, once the crisis
began, families and individuals started tightening their
belts, bolstering their bank accounts or trying to pay down
borrowings (another form of saving).
If households and corporations are trying to save more of
their income and spend less, then it is up to the other two
sectors of the economy - the government and the import-export
sector - to spend more and save less to keep the economy
humming. In other words, there needs to be a large trade
surplus, a large government deficit or some combination of
the two. This isn't a matter of economic theory; it's based
in simple accounting.
What if a government instead embarks on an austerity
program? Income growth will stall, and household wages and
business profits may fall....
On the one hand, the VoxEU article does a fine job of
assembling long-term data on a global basis. It demonstrates
that the corporate savings glut is long-standing and that it
has been accompanied by a decline in personal savings.
However, it fails to depict what an unnatural state of
affairs this is. The corporate sector as a whole in
non-recessionary times ought to be net spending, as in
borrowing and investing in growth. As a market-savvy buddy
put it, "If a company isn't investing in the business of its
business, why should I?" I attributed the corporate savings
trend in the US to the fixation on quarterly
earnings, which sources such as McKinsey partners with a
broad view of firms' projects were telling me was killing
investment (any investment also has an income-statement
impact, through planning, marketing, design, and start-up
expenses). This post, by contrast, treats this development as
lacking in any agency. Labor share of GDP dropped and savings
rose. They attribute that to lower interest rates over time.
They again fail to see that as the result of power dynamics
and political choices....
Feb 28, 2017 6:03 PM EST
NEW YORK --
Amazon's cloud-computing service,
Amazon Web Services, experienced an outage in its eastern U.S.
region Tuesday afternoon, causing unprecedented and widespread
problems for thousands of websites and apps.
Amazon is the largest provider of cloud computing services in
the U.S. Beginning around midday Tuesday on the East Coast, one
region of its "S3" service based in Virginia began to experience
what Amazon, on its service site, called "increased error rates."
In a statement, Amazon said as of 4 p.m. E.T. it was still
experiencing "high error rates" that were "impacting various AWS
services."
"We are working hard at repairing S3, believe we understand root
cause, and are working on implementing what we believe will
remediate the issue," the company said.
But less than an hour later, an update offered good news: "As of
1:49 PM PST, we are fully recovered for operations for adding new
objects in S3, which was our last operation showing a high error
rate. The Amazon S3 service is operating normally," the company
said.
Amazon's Simple Storage Service, or S3, stores files and data
for companies on remote servers. It's used for everything from
building websites and apps to storing images, customer data and
customer transactions.
"Anything you can think about storing in the most cost-effective
way possible," is how Rich Mogull, CEO of data security firm
Securosis, puts it.
Amazon has a strong track record of stability with its cloud
computing service, CNET senior editor Dan Ackerman told CBS News.
"AWS... is known for having really good 'up time,'" he said,
using industry language.
Over time, cloud computing has become a major part of Amazon's
empire.
"Very few people host their own web servers anymore, it's all
been outsourced to these
big providers
, and Amazon is one of the major ones," Ackerman
said.
The problem Tuesday affected both "front-end" operations --
meaning the websites and apps that users see -- and back-end data
processing that takes place out of sight. Some smaller online
services, such as Trello, Scribd and IFTTT, appeared to be down for
a while, although all have since recovered.
Some affected websites had fun with the crash, treating it like
a snow day.
"... "From a sustainability and availability standpoint, we definitely need to look at our strategy to not be vendor specific, including with Amazon," said Lee. "That's something that we're aware of and are working towards." ..."
"... "Elastic load balances and other services make it easy to set up. However, it's a double-edged sword, because these types of services will also make it harder to be vendor-agnostic. When other cloud platform don't offer the same services, how do you wean yourself off of them?" ..."
"... Multi-year commitments are another trap, he said. And sometimes there's an extra unpleasant twist -- minimum usage requirements that go up in the later years, like balloon payments on a mortgage. ..."
The Amazon outage reminds companies that having all their eggs in one cloud basket might
be a risky strategy.
"That is the elephant in the room these days," said Lee. "More and more companies are starting
to move their services to the cloud providers. I see attackers trying to compromise the cloud provider
to get to the information."
If attackers can get into the cloud systems, that's a lot of data they could have access to. But
attackers can also go after availability.
"The DDoS attacks are getting larger in scale, and with more IoT systems coming online and being
very hackable, a lot of attackers can utilize that as a way to do additional attacks," he said.
And, of course, there's always the possibility of a cloud service outage for other reasons.
The 11-hour outage that Amazon suffered in late February was due to a typo, and affected Netflix,
Reddit, Adobe and Imgur, among other sites.
"From a sustainability and availability standpoint, we definitely need to look at our strategy
to not be vendor specific, including with Amazon," said Lee. "That's something that we're aware of
and are working towards."
The problem is that Amazon offers some very appealing features.
"Amazon has been very good at providing a lot of services that reduce the investment that needs
to be made to build the infrastructure," he said. "Elastic load balances and other services make
it easy to set up. However, it's a double-edged sword, because these types of services will also
make it harder to be vendor-agnostic. When other cloud platforms don't offer the same services, how
do you wean yourself off of them?"
... ... ...
"If you have a containerized approach, you can run in Amazon's container services, or on Azure,"
said Tim Beerman, CTO at Ensono, a managed
services provider that runs its own cloud data center, manages on-premises environments for customers,
and also helps clients run in the public cloud.
"That gives you more portability, you can pick something up and move it," he said.
But that, too, requires advance planning.
"It starts with the application," he said. "And you have to write it a certain way."
But the biggest contributing factor to cloud lock-in is data, he said.
"They make it really easy to put the data in, and they're not as friendly about taking that data
out," he said.
The lack of friendliness often shows up in the pricing details.
"Usually the price is lower for data transfers coming into a cloud service provider versus the
price to move data out," said Thales' Radford.
Multi-year commitments are another trap, he said. And sometimes there's an extra unpleasant twist
-- minimum usage requirements that go up in the later years, like balloon payments on a mortgage.
It is striking how the media feel such an extraordinary
need to blame robots and productivity growth for the recent
job loss in manufacturing rather than trade. We got yet
another example of this exercise in a New York Times piece *
by Claire Cain Miller, with the title "evidence that robots
are winning the race for American jobs." The piece highlights
a new paper * by Daron Acemoglu and Pascual Restrepo which
finds that robots have a large negative impact on wages and
employment.
While the paper has interesting evidence on the link
between the use of robots and employment and wages, some of
the claims in the piece do not follow. For example, the
article asserts:
"The paper also helps explain a mystery that has been
puzzling economists: why, if machines are replacing human
workers, productivity hasn't been increasing. In
manufacturing, productivity has been increasing more than
elsewhere - and now we see evidence of it in the employment
data, too."
Actually, the paper doesn't provide any help whatsoever in
solving this mystery. Productivity growth in manufacturing
has almost always been more rapid than productivity growth
elsewhere. Furthermore, it has been markedly slower even in
manufacturing in recent years than in prior decades.
According to the Bureau of Labor Statistics, productivity
growth in manufacturing has averaged less than 1.2 percent
annually over the last decade and less than 0.5 percent over
the last five years. By comparison, productivity growth
averaged 2.9 percent a year in the half century from 1950 to
2000.
The article is also misleading in asserting:
"The paper adds to the evidence that automation, more than
other factors like trade and offshoring that President Trump
campaigned on, has been the bigger long-term threat to
blue-collar jobs (emphasis added)."
In terms of recent job loss in manufacturing, and in
particular the loss of 3.4 million manufacturing jobs between
December of 2000 and December of 2007, the rise of the trade
deficit has almost certainly been the more important factor.
We had substantial productivity growth in manufacturing
between 1970 and 2000, with very little loss of jobs. The
growth in manufacturing output offset the gains in
productivity. The new part of the story in the period from
2000 to 2007 was the explosion of the trade deficit to a peak
of nearly 6.0 percent of GDP in 2005 and 2006.
It is also worth noting that we could in fact expect
substantial job gains in manufacturing if the trade deficit
were reduced. If the trade deficit fell by 2.0 percentage
points of GDP ($380 billion a year) this would imply an
increase in manufacturing output of more than 22 percent. If
the productivity of the manufacturing workers producing this
additional output was the same as the rest of the
manufacturing workforce it would imply an additional 2.7
million jobs in manufacturing. That is more jobs than would
be eliminated by productivity at the recent 0.5 percent
growth rate over the next forty years, even assuming no
increase in demand over this period.
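A quick back-of-the-envelope check of that arithmetic, using only the figures stated in the paragraph above (the GDP, output and employment bases below are implied by those figures, not taken from official data; assumes the bc calculator is available):
echo 'scale=0; 380/0.02' | bc    # implied GDP base: about $19,000 billion
echo 'scale=0; 380/0.22' | bc    # implied manufacturing output base: about $1,700 billion
echo 'scale=2; 2.7/0.22' | bc    # implied manufacturing employment base: about 12.3 million jobs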
While the piece focuses on the displacement of less
educated workers by robots and equivalent technology, it is
likely that the areas where displacement occurs will be
determined in large part by the political power of different
groups. For example, it is likely that in the not distant
future improvements in diagnostic technology will allow a
trained professional to make more accurate diagnoses than the
best doctor. Robots are likely to be better at surgery than
the best surgeon. The extent to which these technologies will
be allowed to displace doctors is likely to depend more on
the political power of the American Medical Association than
the technology itself.
Finally, the question of whether the spread of robots will
lead to a transfer of income from workers to the people who
"own" the robots will depend to a large extent on our patent
laws. In the last four decades we have made patents longer
and stronger. If we instead made them shorter and weaker, or
better relied on open source research, the price of robots
would plummet and workers would be better positioned to
capture the gains of productivity growth as they had in
prior decades. In this story it is not robots who are taking
workers' wages, it is politicians who make strong patent
laws.
The robots are coming, whether Trump's Treasury secretary admits it or not
By Lawrence H. Summers - Washington Post
As I learned (sometimes painfully) during my time at the Treasury Department, words spoken
by Treasury secretaries can over time have enormous consequences, and therefore should be carefully
considered. In this regard, I am very surprised by two comments made by Secretary Steven Mnuchin
in his first public interview last week.
In reference to a question about artificial intelligence displacing American workers, Mnuchin
responded that "I think that is so far in the future - in terms of artificial intelligence taking
over American jobs - I think we're, like, so far away from that [50 to 100 years], that it is
not even on my radar screen." He also remarked that he did not understand tech company valuations
in a way that implied that he regarded them as excessive. I suppose there is a certain internal
logic. If you think AI is not going to have any meaningful economic effects for a half a century,
then I guess you should think that tech companies are overvalued. But neither statement is defensible.
Mnuchin's comment about the lack of impact of technology on jobs is to economics approximately
what global climate change denial is to atmospheric science or what creationism is to biology.
Yes, you can debate whether technological change is on net good. I certainly believe it is. And
you can debate what the job creation effects will be relative to the job destruction effects.
I think this is much less clear, given the downward trends in adult employment, especially for
men over the past generation.
But I do not understand how anyone could reach the conclusion that all the action with technology
is half a century away. Artificial intelligence is behind autonomous vehicles that will affect
millions of jobs driving and dealing with cars within the next 15 years, even on conservative
projections. Artificial intelligence is transforming everything from retailing to banking to the
provision of medical care. Almost every economist who has studied the question believes that technology
has had a greater impact on the wage structure and on employment than international trade and
certainly a far greater impact than whatever increment to trade is the result of much debated
trade agreements....
Oddly, the robots are always coming in articles like Summers', but they never seem to get here.
Automation has certainly played a role, but outsourcing has been a much bigger issue.
He has gotten a lot better and was supposedly pretty good when advising Obama, but he's sort
of reverted to form with the election of Trump and the prominence of the debate on trade policy.
Technology rearranges and changes human roles, but it makes entries on both sides of the ledger.
On net as long as wages grow then so will the economy and jobs. Trade deficits only help financial
markets and the capital owning class.
Summers is a good example of those economists that never seem to pay a price for their errors.
Imo, he should never be listened to. His economics is faulty. His performance in the Clinton
administration and his part in the Russian debacle should be enough to consign him to anonymity.
People would do well to ignore him.
Yeah he's one of those expert economists and technocrats who never admit fault. You don't become
Harvard President or Secretary of the Treasury by doing that.
One time that Krugman admitted error was about productivity gains in the 1990s. He said
he didn't see the gains from computers in the numbers, and they weren't there at
first, but later the productivity numbers increased.
It was sort of like what Summers and Munchkin are discussing here, but there's all sorts
of debate about measuring productivity and what it means.
Although Midnight Commander is a text-mode application, it can make use of the
mouse. The mc delivered with openSUSE will make use of the mouse when used in a GUI
terminal, without any further configuration needed.
The text-mode terminal that we
get when booting into
runlevel 2 or 3 is a different story. There you have to install the package
gpm ("general purpose mouse"), also called the
mouse server, which Linux uses to receive movements and clicks from the mouse. Start
gpm and then start Midnight Commander.
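On a recent openSUSE release, something like the following should do it (package and service names are the usual ones, but may differ by release; older releases use rcgpm instead of systemctl):
sudo zypper install gpm           # install the console mouse server
sudo systemctl start gpm.service  # start it for this session ("enable" it to start at boot)
mc                                # mc on the text console can now use the mouse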
If you switch to the text terminal using Ctrl + Alt + F1, then
gpm will not work, because another driver that belongs to the
GUI (the X
server) claims control over the mouse.
... ... ...
FTP browsing
This is file browsing on a remote FTP server, just as it is on your own computer.
Press F9 to select the drop-down menus at the top of the screen.
Press Alt + L if you want to use the left panel, or Alt + R
for the right panel.
Press Alt + P to open an input box where you enter the server name. Enter for instance
ftp.gwdg.de/pub
and press Enter.
mc will now try an anonymous connection to the remote machine. If the machine responds, you'll get
a directory listing of /pub on the remote server.
It is possible to do the same from the mc command line by typing:
cd /#ftp:ftp.gwdg.de/pub
Archive browsing
An archive, in the classic meaning, is a compressed file. In Linux you can recognize them by suffixes like
tgz, tar.gz, tbz, tar.bz2
and many more, but the above few are the most used.
Highlight the file
Press Enter
That's it. Midnight Commander will decompress the file for you and present its internal structure
like any other directory. If you want to extract one or more files from the archive, mark what you want
to extract and use F5 to copy them to the other panel. Done.
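For comparison, the rough shell equivalents of what mc is doing here (standard tar options; the archive name and path are only examples):
tar tzf archive.tgz                   # list the contents of a gzip-compressed tar archive
tar xzf archive.tgz some/path/file    # extract a single file from it into the current directory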
RPM browsing
The package installation files for any SUSE release are
RPMs, and
mc will let you browse them.
Highlight the file
Press Enter
You'll see a few files:
/INFO
CONTENTS.cpio
HEADER
*INSTALL
*UPGRADE
Browse to see details of your RPM.
CONTENTS.cpio is the actual archive containing the files, and if you want to look inside:
Highlight the file
Press Enter
(You know the drill)
*INSTALL and *UPGRADE will do what their names suggest, but if you want only to extract one or
more files from CONTENTS.cpio, then use F5 to copy them into the directory in the
other panel.
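Outside mc, the same CONTENTS.cpio payload can be unpacked with the standard rpm2cpio and cpio tools (the package name is only an example):
rpm2cpio package.rpm | cpio -t       # list the files contained in the RPM
rpm2cpio package.rpm | cpio -idmv    # unpack them into the current directory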
PuTTY and line drawing
PuTTY is a terminal application used to access remote computers running Linux via ssh (see SSH
tunnels from Microsoft Windows for details). The line drawing in Midnight Commander, YaST and
other applications that draw lines using special characters can be displayed wrongly as something
else. The solution is to change these settings:
menu: Window > Translation:
Received data assumed to be in which character set: UTF-8
Handling of line drawing characters: Use Unicode for line drawing
If that doesn't help, you may set this too:
menu: Connection > Connection-type string: linux
menu: Terminal > Keyboard > The Function keys and keypad: Linux
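(The snippet below looks like a pair of entries from the mc user menu, opened with F2. The line starting with "+" is a display condition, and %f, %d and %D are mc macros for the current file, the current directory and the directory shown in the other panel.)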
+ t r & ! t t
d Diff against file of same name in other directory
if [ "%d" = "%D" ]; then
echo "The two directores must be different"
exit 1
fi
if [ -f %D/%f ]; then # if two of them, then
diff -up %f %D/%f | sed -e 's/\(^-.*\)/\x1b[1;31m\1\x1b[0m/g' \
-e 's/\(^\+.*\)/\x1b[1;32m\1\x1b[0m/g' \
-e 's/\(^@.*\)/\x1b[36m\1\x1b[0m/g' | less -R
else
echo %f: No copy in %D/%f
fi
D Diff current directory against other directory
if [ "%d" = "%D" ]; then
echo "The two directores must be different"
exit 1
fi
diff -up %d %D | sed -e 's/\(^-.*\)/\x1b[1;31m\1\x1b[0m/g' \
-e 's/\(^\+.*\)/\x1b[1;32m\1\x1b[0m/g' \
-e 's/\(^@.*\)/\x1b[36m\1\x1b[0m/g' | less -R
"... And it is not only automation vs. in-house labor. There is environmental/compliance cost (or lack thereof) and the fully loaded business services and administration overhead, taxes, etc. ..."
"... When automation increased productivity in agriculture, the government guaranteed free high school education as a right. ..."
"... Now Democrats like you would say it's too expensive. So what's your solution? You have none. You say "sucks to be them." ..."
"... And then they give you the finger and elect Trump. ..."
"... It wasn't only "low-skilled" workers but "anybody whose job could be offshored" workers. Not quite the same thing. ..."
"... It also happened in "knowledge work" occupations - for those functions that could be separated and outsourced without impacting the workflow at more expense than the "savings". And even if so, if enough of the competition did the same ... ..."
"... And not all outsourcing was offshore - also to "lowest bidders" domestically, or replacing "full time" "permanent" staff with contingent workers or outsourced "consultants" hired on a project basis. ..."
"... "People sure do like to attribute the cause to trade policy." Because it coincided with people watching their well-paying jobs being shipped overseas. The Democrats have denied this ever since Clinton and the Republicans passed NAFTA, but finally with Trump the voters had had enough. ..."
"... Why do you think Clinton lost Wisconsin, Michigan, Pennysylvania and Ohio? ..."
If it was technology, why do US companies buy from low-labor-cost producers at the end of supply
chains 2,000 to 10,000 miles away? Why pay the transportation cost? Automated factories could be built
close by.
There is no such thing as an automated factory. Manufacturing is done by people, *assisted* by
automation. Or only part of the production pipeline is automated, but people are still needed
to fill in the not-automated pieces.
And it is not only automation vs. in-house labor. There is environmental/compliance cost
(or lack thereof) and the fully loaded business services and administration overhead, taxes, etc.
Trade policy put "low-skilled" workers in the U.S. in competition with workers in poorer countries.
What did you think was going to happen? The Democrat leadership made excuses. David Autor's TED
talk stuck with me. When automation increased productivity in agriculture, the government
guaranteed free high school education as a right.
Now Democrats like you would say it's too expensive. So what's your solution? You have
none. You say "sucks to be them."
And then they give you the finger and elect Trump.
It wasn't only "low-skilled" workers but "anybody whose job could be offshored" workers. Not
quite the same thing.
It also happened in "knowledge work" occupations - for those functions that could be separated
and outsourced without impacting the workflow at more expense than the "savings". And even if
so, if enough of the competition did the same ...
And not all outsourcing was offshore - also to "lowest bidders" domestically, or replacing
"full time" "permanent" staff with contingent workers or outsourced "consultants" hired on a project
basis.
"People sure do like to attribute the cause to trade policy." Because it coincided with people
watching their well-paying jobs being shipped overseas. The Democrats have denied this ever since
Clinton and the Republicans passed NAFTA, but finally with Trump the voters had had enough.
Why do you think Clinton lost Wisconsin, Michigan, Pennysylvania and Ohio?
Instead of looking at this as an excuse for job losses due to trade deficits, we should
be seeing it as a reason to gain back manufacturing jobs in order to retain a few more decent
jobs in a sea of garbage jobs. Mmm, that's so wrong - working on garbage trucks is now one of
the good jobs by comparison. A sea of garbage jobs would be an improvement. We are in a sea of
McJobs.
Yes sir, often enough but not always. I had a great job as an IT large systems capacity planner
and performance analyst, but not as good as the landscaping, pool, and lawn maintenance for myself
that I enjoy now as a leisure occupation in retirement. My best friend died a greens keeper, but
he preferred landscaping when he was young. Another good friend of mine was a poet, now dying
of cancer if depression does not take him first.
But you are correct, no one but the welders, material handlers (paid to lift weights all day),
machinists, and then almost every one else liked their jobs at Virginia Metal Products, a union
shop, when I worked there the summer of 1967. That was on the swing shift though when all of the
big bosses were at home and out of our way. On the green chain in the lumber yard of Kentucky
flooring everyone but me wanted to leave, but my mom made me go into the VMP factory and work
nights at the primer drying kiln stacking finished panel halves because she thought the work on
the green chain was too hard. The guys on the green chain said that I was the first high school
graduate to make it past lunch time on their first day. I would have been buff and tan by the
end of summer heading off to college (where I would drop out in just ten weeks) had my mom not
intervened.
As a profession no group that I know is happier than auto mechanics that do the same work as
a hobby on their hours off that they do for a living at work, at least the hot rod custom car
freaks at Jamie's Exhaust & Auto Repair in Richmond, Virginia are that way. The power tool sales
and maintenance crew at Arthur's Electric Service Inc. enjoy their jobs too.
Despite the name which was on their incorporation done back when they rebuilt auto generators,
Arthur's sells and services lawnmowers, weed whackers, chain saws and all, but nothing electric.
The guy in the picture at the link is Robert Arthur, the founder's son who is our age roughly.
In theory, in the longer term, as robotics becomes the norm rather than
the exception, there will be no advantage in chasing cheap labour around
the world. Given ready access to raw materials, the labour costs of
manufacturing in Birmingham should be no different to the labour costs
in Beijing. This will require the democratisation of the ownership of
technology. Unless national governments develop commonly owned
technology the 1% will truly become the organ grinders and everyone else
the monkeys. One has only to look at companies like Microsoft and Google
to see a possible future - bigger than any single country and answerable
to no one. Common ownership must be the future. Deregulation and market-driven
economics are the road to technological serfdom.
Except that the raw materials for steel production are available
in vast quantities in China.
You are also forgetting land. The
power remains with those who own it. Most of Central London is
still owned by the same half dozen families as in 1600.
You can only use robotics in countries that have the labour with the
skills to maintain them. Robots do not look after themselves; they need
highly skilled technicians to keep them working. I once worked for a
Japanese company and they only used robots in the higher wage high skill
regions. In low wage economies they used manual labour and low tech
products.
"... And all costs are labor costs. It it isn't labor cost, it's rents and economic profit which mean economic inefficiency. An inefficient economy is unstable. Likely to crash or drive revolution. ..."
"... Free lunch economics seeks to make labor unnecessary or irrelevant. Labor cost is pure liability. ..."
"... Yet all the cash for consumption is labor cost, so if labor cost is a liability, then demand is a liability. ..."
"... Replace workers with robots, then robots must become consumers. ..."
"... "Replace workers with robots, then robots must become consumers." Well no - the OWNERS of robots must become consumers. ..."
"... I am old enough to remember the days of good public libraries, free university education, free bus passes for seniors and low land prices. Is the income side of the equation all that counts? ..."
Robots and Inequality: A Skeptic's Take : Paul Krugman presents "
Robot Geometry " based on
Ryan Avent 's "Productivity Paradox". It's more-or-less the skill-biased technological change
hypothesis, repackaged. Technology makes workers more productive, which reduces demand for workers,
as their effective supply increases. Workers still need to work, with a bad safety net, so they
end up moving to low-productivity sectors with lower wages. Meanwhile, the low wages in these
sectors makes it inefficient to invest in new technology.
My question: Are Reagan-Thatcher countries the only ones with robots? My image, perhaps it is
wrong, is that plenty of robots operate in
Japan and
Germany too, and both
countries are roughly just as technologically advanced as the US. But Japan and Germany haven't
seen the same increase in inequality as the US and other Anglo countries after 1980 (graphs below).
What can explain the dramatic differences in inequality across countries? Fairly blunt changes
in labor market institutions, that's what. This goes back to Peter Temin's "
Treaty of
Detroit " paper and the oddly ignored series of papers by
Piketty, Saez and coauthors which argues that changes in top marginal tax rates can largely
explain the evolution of the Top 1% share of income across countries. (Actually, it goes back
further -- people who work in Public Economics had "always" known that pre-tax income is sensitive
to tax rates...) They also show that the story of inequality is really a story of incomes at the
very top -- changes in other parts of the income distribution are far less dramatic. This evidence
also is not suggestive of a story in which inequality is about the returns to skills, or computer
usage, or the rise of trade with China. ...
Yet another economist bamboozled by free lunch economics.
In free lunch economics, you never consider how demand is impacted by changes in labor costs.
TANSTAAFL, so cut labor costs and consumption must be cut.
Funny things can be done if money is printed and helicopter dropped unequally.
Printed money can accumulate in the hands of the rentier cutting labor costs and pocketing
the savings without cutting prices.
Free lunch economics invented the idea price equals cost, but that is grossly distorting.
And all costs are labor costs. If it isn't labor cost, it's rents and economic profit which
mean economic inefficiency. An inefficient economy is unstable. Likely to crash or drive revolution.
Free lunch economics seeks to make labor unnecessary or irrelevant. Labor cost is pure
liability.
Yet all the cash for consumption is labor cost, so if labor cost is a liability, then demand
is a liability.
Replace workers with robots, then robots must become consumers.
I am old enough to remember the days of good public libraries, free university education,
free bus passes for seniors and low land prices. Is the income side of the equation all that counts?
People are worried about robots taking jobs. Driverless cars are around the corner. Restaurants
and shops increasingly carry the option to order by touchscreen. Google's clever algorithms provide
instant translations that are remarkably good.
But the economy does not feel like one undergoing a technology-driven productivity boom. In
the late 1990s, tech optimism was everywhere. At the same time, wages and productivity were rocketing
upward. The situation now is completely different. The most recent jobs reports in America and
Britain tell the tale. Employment is growing, month after month after month. But wage growth is
abysmal. So is productivity growth: not surprising in economies where there are lots of people
on the job working for low pay.
The obvious conclusion, the one lots of people are drawing, is that the robot threat is totally
overblown: the fantasy, perhaps, of a bubble-mad Silicon Valley - or an effort to distract from
workers' real problems, trade and excessive corporate power. Generally speaking, the problem is
not that we've got too much amazing new technology but too little.
This is not a strawman of my own invention. Robert Gordon makes this case. You can see Matt
Yglesias make it here. * Duncan Weldon, for his part, writes: **
"We are debating a problem we don't have, rather than facing a real crisis that is the polar
opposite. Productivity growth has slowed to a crawl over the last 15 or so years, business investment
has fallen and wage growth has been weak. If the robot revolution truly was under way, we would
see surging capital expenditure and soaring productivity. Right now, that would be a nice 'problem'
to have. Instead we have the reality of weak growth and stagnant pay. The real and pressing concern
when it comes to the jobs market and automation is that the robots aren't taking our jobs fast
enough."
And in a recent blog post Paul Krugman concluded: *
"I'd note, however, that it remains peculiar how we're simultaneously worrying that robots
will take all our jobs and bemoaning the stalling out of productivity growth. What is the story,
really?"
What is the story, indeed. Let me see if I can tell one. Last fall I published a book: "The
Wealth of Humans". In it I set out how rapid technological progress can coincide with lousy growth
in pay and productivity. Start with this:
"Low labour costs discourage investments in labour-saving technology, potentially reducing
productivity growth."
This is an old concern in economics; it's "capital-biased technological change," which tends to
shift the distribution of income away from workers to the owners of capital....
Catherine Rampell and Nick Wingfield write about the growing evidence * for "reshoring" of
manufacturing to the United States. * They cite several reasons: rising wages in Asia; lower energy
costs here; higher transportation costs. In a followup piece, ** however, Rampell cites another
factor: robots.
"The most valuable part of each computer, a motherboard loaded with microprocessors and memory,
is already largely made with robots, according to my colleague Quentin Hardy. People do things
like fitting in batteries and snapping on screens.
"As more robots are built, largely by other robots, 'assembly can be done here as well as anywhere
else,' said Rob Enderle, an analyst based in San Jose, California, who has been following the
computer electronics industry for a quarter-century. 'That will replace most of the workers, though
you will need a few people to manage the robots.' "
Robots mean that labor costs don't matter much, so you might as well locate in advanced countries
with large markets and good infrastructure (which may soon not include us, but that's another
issue). On the other hand, it's not good news for workers!
This is an old concern in economics; it's "capital-biased technological change," which tends
to shift the distribution of income away from workers to the owners of capital.
Twenty years ago, when I was writing about globalization and inequality, capital bias didn't
look like a big issue; the major changes in income distribution had been among workers (when you
include hedge fund managers and CEOs among the workers), rather than between labor and capital.
So the academic literature focused almost exclusively on "skill bias", supposedly explaining the
rising college premium.
But the college premium hasn't risen for a while. What has happened, on the other hand, is
a notable shift in income away from labor:
[Graph]
If this is the wave of the future, it makes nonsense of just about all the conventional wisdom
on reducing inequality. Better education won't do much to reduce inequality if the big rewards
simply go to those with the most assets. Creating an "opportunity society," or whatever it is
the likes of Paul Ryan etc. are selling this week, won't do much if the most important asset you
can have in life is, well, lots of assets inherited from your parents. And so on.
I think our eyes have been averted from the capital/labor dimension of inequality, for several
reasons. It didn't seem crucial back in the 1990s, and not enough people (me included!) have looked
up to notice that things have changed. It has echoes of old-fashioned Marxism - which shouldn't
be a reason to ignore facts, but too often is. And it has really uncomfortable implications.
But I think we'd better start paying attention to those implications.
"The most valuable part of each computer, a motherboard loaded with microprocessors and memory,
is already largely made with robots, according to my colleague Quentin Hardy. People do things
like fitting in batteries and snapping on screens.
"...already largely made..."? already? circuit boards were almost entirely populated by machines
by 1985, and after the rise of surface mount technology you could drop the "almost". in 1990 a
single machine could place 40k+/hour parts small enough they were hard to pick up with fingers.
And now for something completely different. Ryan Avent has a nice summary * of the argument
in his recent book, trying to explain how dramatic technological change can go along with stagnant
real wages and slowish productivity growth. As I understand it, he's arguing that the big tech
changes are happening in a limited sector of the economy, and are driving workers into lower-wage
and lower-productivity occupations.
But I have to admit that I was having a bit of a hard time wrapping my mind around exactly
what he's saying, or how to picture this in terms of standard economic frameworks. So I found
myself wanting to see how much of his story could be captured in a small general equilibrium model
- basically the kind of model I learned many years ago when studying the old trade theory.
Actually, my sense is that this kind of analysis is a bit of a lost art. There was a time when
most of trade theory revolved around diagrams illustrating two-country, two-good, two-factor models;
these days, not so much. And it's true that little models can be misleading, and geometric reasoning
can suck you in way too much. It's also true, however, that this style of modeling can help a
lot in thinking through how the pieces of an economy fit together, in ways that algebra or verbal
storytelling can't.
So, an exercise in either clarification or nostalgia - not sure which - using a framework that
is basically the Lerner diagram, ** adapted to a different issue.
Imagine an economy that produces only one good, but can do so using two techniques, A and B,
one capital-intensive, one labor-intensive. I represent these techniques in Figure 1 by showing
their unit input coefficients:
[Figure 1]
Here AB is the economy's unit isoquant, the various combinations of K and L it can use to produce
one unit of output. E is the economy's factor endowment; as long as the aggregate ratio of K to
L is between the factor intensities of the two techniques, both will be used. In that case, the
wage-rental ratio will be the slope of the line AB.
Wait, there's more. Since any point on the line passing through A and B has the same value,
the place where it hits the horizontal axis is the amount of labor it takes to buy one unit of
output, the inverse of the real wage rate. And total output is the ratio of the distance along
the ray to E divided by the distance to AB, so that distance is 1/GDP.
You can also derive the allocation of resources between A and B; not to clutter up the diagram
even further, I show this in Figure 2, which uses the K/L ratios of the two techniques and the
overall endowment E:
[Figure 2]
Now, Avent's story. I think it can be represented as technical progress in A, perhaps also
making A even more capital-intensive. So this would amount to a movement southwest to a point
like A' in Figure 3:
[Figure 3]
We can see right away that this will lead to a fall in the real wage, because 1/w must rise.
GDP and hence productivity does rise, but maybe not by much if the economy was mostly using the
labor-intensive technique.
And what about allocation of labor between sectors? We can see this in Figure 4, where capital-using
technical progress in A actually leads to a higher share of the work force being employed in labor-intensive
B:
[Figure 4]
So yes, it is possible for a simple general equilibrium analysis to capture a lot of what Avent
is saying. That does not, of course, mean that he's empirically right. And there are other things
in his argument, such as hypothesized effects on the direction of innovation, that aren't in here.
But I, at least, find this way of looking at it somewhat clarifying - which, to be honest,
may say more about my weirdness and intellectual age than it does about the subject.
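A small numerical check of the geometry above, with made-up unit input coefficients (not Krugman's). The factor-price line through A and B satisfies r*K + w*L = 1, so r*aK + w*aL = 1 and r*bK + w*bL = 1, giving w = (aK-bK)/(aK*bL-aL*bK) and r = (bL-aL)/(aK*bL-aL*bK); with bc:
echo 'scale=4; d=4*3-1*1; w=(4-1)/d; r=(3-1)/d; w; r' | bc
# A=(aK=4, aL=1), B=(bK=1, bL=3): w=.2727, r=.1818, so 1/w is about 3.67 units of labor per unit of output
echo 'scale=4; d=3*3-.5*1; w=(3-1)/d; w' | bc
# after capital-using technical progress in A, A'=(aK=3, aL=.5): w=.2352, so the real wage falls, as in Figure 3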
I think this illustrates my point very clearly. If you had charts of wealth by age it would be
even clearer. Without a knowledge of the discounted expected value of public pensions it is hard
to draw any conclusions from this list.
I know very definitely that in Australia and the UK people are very reliant on superannuation
and housing assets. In both Australia and the UK it is common to sell expensive housing in the
capital and move to cheaper coastal locations upon retirement, investing the capital to provide
retirement income. Hence a larger median wealth is NEEDED.
It is hard otherwise to explain the much higher median wealth in Australia and the UK.
Ryan Avent's analysis demonstrates what is wrong with the libertarian, right wing belief that
cheap labor is the answer to every problem when in truth cheap labor is the source of many of
our problems.
Spencer,
as I have said before, I don't really care too much what wages are - I care about income. It is
low income that is the problem. I'm a UBI guy, if money is spread around, and workers can say
no to exploitation, low wages will not be a problem.
Have we not seen a massive shift in pretax income distribution? Yes ... which tells me that
changes in tax rate structures are not the only culprit. Though they are an important culprit.
Maybe - but
1. changes in taxes can affect incentives (especially think of real investment and corporate taxes
and also personal income taxes and executive remuneration);
2. changes in the distribution of purchasing power can affect the way growth in the economy occurs;
3. changes in taxes also affect government spending and government spending tends to be more progressively
distributed than private income.
Composite services labor hours increase with poor productivity growth - output per hour of
labor input. The composite measure of service-industry output is notoriously problematic (per BLS and
BEA).
Goods labor hours decrease with increasing productivity growth. Goods output per hour is easy
to measure, and it is where we have the greatest experience and knowledge.
Put this together and the composite national productivity growth rate can't grow as fast as services
consume more of the labor hours.
Simple arithmetic.
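A stylised illustration of that arithmetic with made-up numbers, using the commenter's simple hours-weighted average (actual composite productivity growth also depends on output shares, so this isolates only the composition effect); with bc:
echo 'scale=2; 0.20*3.0 + 0.80*0.5' | bc   # goods 20% of hours growing 3%/yr, services 80% growing 0.5%/yr: composite 1.00%/yr
echo 'scale=2; 0.10*3.0 + 0.90*0.5' | bc   # shift 10 points of hours into services: composite falls to .75%/yr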
Elaboration on Services productivity measures:
How do you measure a retail clerk's unit output?
How do you measure an engineer's unit output?
How do you equate a retail clerk's output with an engineer's output for a composite measure?
Now add the composite retail clerk labor hours to the engineering labor hours... which dominates in
composite labor hours? Duh! So even in services, productivity is weighted heavily toward the lowest
productivity job market.
Substitute Hospitality services for Retail Clerk services. Substitute truck drivers services
for Hospitality Services, etc., etc., etc.
I have spent years tracking productivity in goods production of various types ... mining, non-tech
hardware production, high tech hardware production in various sectors of high tech. The present
rates of productivity growth continue to climb (never decline) relative to the past rates in each
goods production sector measured by themselves.
But the proportion of hours in goods production in U.S. is and has been in continual decline
even while value of output has increased in each sector of goods production.
Here's an interesting way to start thinking about Services productivity.
There used to be reasonably large services sector in leisure and business travel agents. Now
there is nearly none... this has been replaced by on-line computer based booking. So travel agent
or equivalent labor hours is now near zippo. Productivity of travel agents went through the roof
in the 1990's & 2000's as the number of people / labor hours dropped like a rock. Where did those
labor hours end up? They went to lower paying services or left the labor market entirely. So lower
paying lower productivity services increased as a proportion of all services, which in composite
reduced total serviced productivity.
You can do the same analysis for hundreds of service jobs that no longer even exist at all
--- switchboard operators, for example, went the way of buggy whip makers and horse-shoe services).
Now take a little ride into the future... not to distant future. When autonomous vehicles become
the norm or even a large proportion of vehicles, and commercial drivers (taxi's, trucking, delivery
services) go the way of horse-shoe services the labor hours for those services (land transportation
of goods & people) will drop precipitously, even as unit deliveries increase, productivity goes
through the roof, but since there's almost no labor hours in that service the composite effect
on productivity in services will drop because the displaced labor hours will end up in a lower
productivity services sector or out of the labor market entirely.
Economists are having problems reconciling composite productivity growth rates with increasing
rates of automation. So they end up saying "no evidence" of automation taking jobs or something
to the effect "not to fear, robotics isn't evident as a problem we have to worry about".
But they know by observation all around them that automation is increasing productivity in
the goods sector, so they can't really discount automation as an issue without shutting their
eyes to everything they see with their "lying eyes". Thus they know deep down that they will have
to reconcile this with BLS and BEA measures.
Ten years ago this wasn't even on economists' radars. Today it's at least being looked into
with more serious effort.
Ten years ago politicians weren't even aware of the possibility of any issues with increasing
rates of automation... they thought it's always increased with increasing labor demand and growth,
so why would that ever change? Ten years ago they concluded it couldn't without even thinking
about it for a moment. Today it's on their radar at least as something that bears perhaps a little
more thought.
Not to worry though... in ten more years they'll either have real reason to worry staring them
in the face, or they'll have figured out why they were so blind before.
Reminds me of not recognizing the "shadow banking" enterprises that they didn't see either
until after the fact.
Or that they thought the risk rating agencies were providing independent and valid risk analysis
so the economists couldn't reconcile the "low level" of market risk with everything else,
so they just assumed everything else was really OK too... it must be "irrational exuberance" that's
to blame.
Let me add that the term "robotics" is a subset of automation. The major distinction is only that
a form of automation that includes some type of 'articulation' and/or some type of dynamic decision
making on the fly (computational branching decisions at nanosecond speeds) is termed 'robotics',
because articulation and dynamic decision making are associated with human capabilities rather
than with automatic machines.
It makes no difference whether productivity gains occur by an articulated machine or one that
isn't... automation just means replacing people's labor with something that improves humans capacity
to produce an output.
When mechanical leverage was invented 3000 or more years ago it was a form of automation, enabling
humans to lift, move heavier objects with less human effort (less human energy).
When performing the change process, metadata is used for analytical purposes. This may be in the
form of reports or a direct search in the database or the databases where metadata is maintained.
Trace information is often used - for instance, to determine in which configuration item changes are
required due to an event. Also information about variants or branches belonging to a configuration
item is used to determine if a change has effects in several places.
Finally metadata may be used to determine if a configuration item has other outstanding event
registrations, such as whether other changes are in the process of being implemented or are awaiting
a decision about implementation.
Consequence Analysis
When analyzing an event, you must consider the cost of implementing changes. This is not always
a simple matter. The following checklists, adapted from a list by Karl Wiegers, may help in analyzing
the effects of a proposed change. The lists are not exhaustive and are meant only as inspiration.
Identify
All requirements affected by or in conflict with the proposed change
The consequences of not introducing the proposed change
Possible adverse effects and other risks connected with implementation
How much of what has already been invested in the product will be lost if the proposed change
is implemented - or if it is not
Check if the proposed change
Has an effect on nonfunctional requirements, such as performance requirements (ISO 9126, a
standard for quality characteristics, defines six characteristics: functional, performance, availability,
usability, maintainability, and portability. The latter five are typically referred to as nonfunctional.)
May be introduced with known technology and available resources
Will cause unacceptable resource requirements in development or test
Will entail a higher unit price
Will affect marketing, production, services, or support
Follow-on effects may be additions, changes, or removals in
User interfaces or reports, internal or external interfaces, or data storage
Designed objects, source code, build scripts, include files
Test plans and test specifications
Help texts, user manuals, training material, or other user documentation
Project plan, quality plan, configuration management plan, and other plans
Other systems, applications, libraries, or hardware components
Roles
The configuration (or change) control board (CCB) is responsible for change control. A configuration
control board may consist of a single person, such as the author or developer when a document or
a piece of code is first written, or an agile team working in close contact with users and sponsors,
if work can be performed in an informal way without bureaucracy and heaps of paper. It may also - and
will typically, for the most important configuration items - consist of a number of people, such as the
project manager, a customer representative, and the person responsible for quality assurance.
Process Descriptions
The methods, conventions, and procedures necessary for carrying out the activities in change control
may be
Description of the change control process structure
Procedures in the life cycles of events and changes
Convention(s) for forming different types of configuration control boards
Definition of responsibility for each type of configuration control board
Template(s) for event registration
Template(s) for change request
Connection with Other Activities
Change control is clearly delimited from other activities in configuration management, though
all activities may be implemented in the same tool in an automated system. Whether change control
is considered a configuration management activity may differ from company to company. Certainly it
is tightly coupled with project management, product management, and quality assurance, and in some
cases is considered part of quality assurance or test activities. Still, when defining and distributing
responsibilities, it's important to keep the boundaries clear, so change control is part of configuration
management and nothing else.
Example
Figure 1–10 shows an example of a process diagram for change control. A number of processes are depicted
in the diagram as boxes with input and output sections (e.g., "Evaluation of event registration").
All these processes must be defined and, preferably, described.
1.5 Status Reporting
Status reporting makes available, in a useful and readable way, the information necessary to effectively
manage a product's development and maintenance. Other activity areas in configuration management
deliver the data foundation for status reporting, in the form of metadata and change control data.
Status reporting entails extraction, arrangement, and formation of these data according to demand.
Figure 1–11 shows how status reporting is influenced by its surroundings.
The result of status reporting is the generation of status report(s). Each company must define
the reports it should be possible to produce. This may be a release note, an item list (by status,
history, or composition), or a trace matrix. It should also be possible to extract ad hoc information
on the basis of a search in the available data.
Process Descriptions
The methods, conventions, and procedures necessary for the activities in status reporting may be
Procedure(s) for the production of available status reports
Procedure(s) for ad hoc extraction of information
Templates for status reports that the configuration management system should be able to produce
Roles
The librarian is responsible for ensuring that data for and information in status reports are
correct, even when reporting is fully automated. Users themselves should be able to extract as many
status reports as possible. Still, it may be necessary to involve a librarian, especially if metadata
and change data are spread over different media.
Connection with Other Activities
Status reporting depends on correct and sufficient data from other activity areas in configuration
management. It's important to understand what information should be available in status reports,
so it can be specified early on. It may be too late to get information in a status report if the
information was requested late in the project and wasn't collected. Status reports from the configuration
management system can be used within almost all process areas in a company. They may be an excellent
source of metrics for other process areas, such as helping to identify which items have had the most
changes made to them, so these items can be the target of further testing or redesign.
1.6 False Friends: Version Control and Baselines
The expression "false friends" is used in the world of languages. When learning a new language,
you may falsely think you know the meaning of a specific word, because you know the meaning of a
similar word in your own or a third language. For example, the expression faire exprès in French
means "to do something on purpose," and not, as you might expect, "to do something fast." There are
numerous examples of "false friends"; some may cause embarrassment, but most "just" cause confusion.
This section discusses the concepts of "version control" and "baseline." These terms are frequently
used when talking about configuration management, but there is no common and universal agreement
on their meaning. They may, therefore, easily become "false friends" if people in a company use them
with different meanings. The danger is even greater between a company and a subcontractor or customer,
where the possibility of cultural differences is greater than within a single company. It is hoped
that this section will help reduce misunderstandings.
Version Control
"Version control" can have any of the following meanings:
Configuration management as such
Configuration management of individual items, as opposed to configuration management of deliveries
Control of versions of an item (identification and storage of items) without the associated
change control (which is a part of configuration management)
Storage of intermediate results (backup of work carried out over a period of time for the
sole benefit of the producer)
It's common but inadvisable to use the terms "configuration management" and "version control"
indiscriminately. A company must make up its mind as to which meaning it will attach to "version
control" and define the term relative to the meaning of configuration management. The term "version
control" is not used in this book unless its meaning is clear from the context. Nor does the concept
exist in IEEE standards referred to in this book, which use "version" in the sense of "edition."
Baseline
"Baseline" can have any of the following meanings:
An item approved and placed in storage in a controlled library
A delivery (a collection of items released for usage)
A configuration item, usually a delivery, connected to a specific milestone in a project
"Configuration item" as used in this book is similar to the first meaning of "baseline" in the
previous list. "Delivery" is used in this book in the sense of a collection of configuration items
(in itself a configuration item), whether or not such a delivery is associated with a milestone or
some other specific event, similar to either the second or third meaning in the list, depending on
circumstances.
The term "baseline" is not used in this book at all, since misconceptions could result from the
many senses in which it's used. Of course, nothing prevents a company from using the term "baseline,"
as long as the sense is clear to everyone involved.
The typical UNIX® administrator has a key range of utilities, tricks,
and systems he or she uses regularly to aid in the process of
administration. There are key utilities, command-line chains, and scripts
that are used to simplify different processes. Some of these tools come
with the operating system, but a majority of the tricks come through
years of experience and a desire to ease the system administrator's life.
The focus of this series is on getting the most from the available tools
across a range of different UNIX environments, including methods of
simplifying administration in a heterogeneous environment.
The unattended script problem
There are many issues around executing unattended scripts, that is, scripts that you run either
automatically through a service like cron or through at commands.
The default mode of cron and at, for example, is for the output of the script to be captured and
then emailed to the user that ran the script. You don't always want the user to get the email that
cron sends by default (especially if everything ran fine); sometimes the user who ran the script
and the person actually responsible for monitoring that output are different.
Therefore, you need better methods for trapping and identifying errors within the script, and
better methods for communicating problems, and optionally successes, to the appropriate person.
Getting the scripts set up correctly is vital; you need to ensure that
the script is configured in such a way that it's easy to maintain and
that the script runs effectively. You also need to be able to trap errors
and output from programs and ensure the security and validity of the
environment in which the script executes. Read along to find out how to
do all of this.
Setting up the environment
Before getting into the uses of unattended scripts, you need to make
sure that you have set up your environment properly. There are various
elements that need to be explicitly configured as part of your script,
and taking the time to do this not only ensures that your script runs
properly, but it also makes the script easier to maintain.
Some things you might need to think about include:
Search path for applications
Search path for libraries
Directory locations
Creating directories or paths
Common files
Some of these elements are straightforward enough to organize. For
example, you can set the path using the following in most
Bourne-compatible shells (sh, Bash, ksh, and zsh):
PATH=/usr/bin:/bin:/usr/sbin
For directory and file locations, just set a variable at the header of the script. You can then use
the variable in each place where you would have used the filename. For example, when writing to a
log file, you might use Listing 1. By setting the name once and then using the variable, you ensure
that you don't get the filename wrong, and if you need to change the filename, you only need to
change it once.
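As an illustration of the idea, a minimal sketch along the lines of Listing 1 (the path and message are only examples):
# Set the location once, at the top of the script...
LOGFILE=/var/log/my_app/backup.log
# ...and refer to the variable everywhere else.
echo "Backup started at $(date)" >>$LOGFILE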
Using a single filename and variable also makes it very easy to create a complex filename. For
example, adding a date to your log filename is made easier by using the date command with a format
specification:
DATE=$(date +%Y%m%d.%H%M)
The above command creates a string containing the date in the format
YYYYMMDD.HHMM, for example, 20070524.2359. You can insert that date
variable into a filename so that your log file is tagged according to the
date it was created.
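For example, a dated log file name might be built like this (the directory is only illustrative):
DATE=$(date +%Y%m%d.%H%M)
LOGFILE=/var/log/my_app/backup.$DATE.log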
If you are not using a date/time unique identifier in the log
filename, it's a good idea to insert some other unique identifier in case
two scripts are run simultaneously. If your script is writing to the same
file from two different processes, you will end up either with corrupted
information or missing information.
All shells support a unique shell ID, based on the shell process ID, which is accessible through
the special $$ variable name. By using a global log variable, you can easily create a unique file
to be used for logging:
LOGFILE=/tmp/$$.err
You can also apply the same global variable principles to directories:
LOGDIR=/var/log/my_app
To ensure that the directories are created, use the -p option for mkdir to create the entire path
of the directory you want to use:
mkdir -p $LOGDIR
Fortunately, this format won't complain if the directories already
exist, which makes it ideal for running in an unattended script.
Finally, it is generally a good idea to use full path names rather
than localized paths in your unattended scripts so that you can use the
previous principles together.
Listing 2. Using full path names in
unattended scripts
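A minimal sketch along the lines of Listing 2, pulling the previous principles together (the exact paths are only illustrative):
LOGDIR=/var/log/my_app
DATE=$(date +%Y%m%d.%H%M)
LOGFILE=$LOGDIR/backup.$DATE.$$.log
mkdir -p $LOGDIR
# Use full paths rather than relying on the current directory
rsync --delete --recursive /shared/ /backups/shared >>$LOGFILE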
Now that you've set up the environment, let's look at how you can use
these principles to help with the general, unattended scripts.
Writing a log file
Probably the simplest improvement you can make to your scripts is to
write the output from your script to a log file. You might not think this
is necessary, but the default operation of cron is to save the output
from the script or command that was executed, and then email it to the
user who owned the crontab or at job.
This is less than perfect for a number of reasons. First of all, the user configured to run the
script might not be the same as the real person who needs to handle the output. You might be running the
script as root, even though the output of the script or command when run
needs to go to somebody else. Setting up a general filter or redirection
won't work if you want to send the output of different commands to
different users.
The second reason is a more fundamental one. Unless something goes
wrong, it's not necessary to receive the output from a script. The cron
daemon sends you the output from stdout and stderr, which means that you
get a copy of the output, even if the script executed successfully.
The final reason is about the management and organization of the
information and output generated. Email is not always an efficient way of
recording and tracking the output from the scripts that are run
automatically. Maybe you just want to keep an archive of the log file
that was a success or email a copy of the error log in the event of a
problem.
Writing out to a log file can be handled in a number of different
ways. The most straightforward way is to redirect output to a file for each command (see Listing 3).
Listing 3. Redirecting output to a file
cd /shared
rsync --delete --recursive . /backups/shared >$LOGFILE
If you want to combine error and standard output into a single file, use numbered redirection (see Listing 4).
Listing 4. Combining error and standard output into a single file
cd /shared
rsync --delete --recursive . /backups/shared >$LOGFILE 2>&1
Listing 4 writes out the information to the same log file.
You might also want to write out the information to separate files (see Listing 5).
Listing 5. Writing out information to separate files
cd /shared
rsync --delete --recursive . /backups/shared >$LOGFILE 2>$ERRFILE
For multiple commands, the redirections can get complex and repetitive. You must ensure, for
example, that you are appending, not overwriting, information to the log file (see Listing 6).
Listing 6. Appending information to the log file
cd /etc
rsync --delete --recursive . /backups/etc >>$LOGFILE 2>>$ERRFILE
A simpler solution, if your shell supports it, is to use an inline block for a group of commands,
and then to redirect the output from the block as a whole. The result is that you can rewrite the
lines in Listing 7 using the structure in Listing 8.
Listing 7. Logging in long form
cd /shared
rsync --delete --recursive . /backups/shared >$LOGFILE 2>$ERRFILE
cd /etc
rsync --delete --recursive . /backups/etc >>$LOGFILE 2>>$ERRFILE
Listing 8 shows an inline block for grouping commands.
Listing 8. Logging using a block
{
    cd /shared
    rsync --delete --recursive . /backups/shared
    cd /etc
    rsync --delete --recursive . /backups/etc
} >$LOGFILE 2>$ERRFILE
The enclosing braces act like a subshell: all the commands in the block are executed as if they
were part of a separate process (although no secondary shell is created; the enclosing block is
just treated as a different logical environment). Using this block, you can collectively redirect
the standard and error output for the entire block instead of for each individual command.
Trapping errors and reporting them
One of the main advantages of the subshell is that you can place a
wrapper around the main content of the script, redirect the errors, and
then send a formatted email with the status of the script execution.
For example, Listing 9 shows a more complete script that sets up the environment, executes the
actual commands and bulk of the process, traps the output, and then sends an email with the output
and error information.
Listing 9. Using a subshell for emailing a
more useful log
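The full script is not reproduced here; the following is only a rough sketch of the approach, with the admin recipient and the mailx invocation as assumptions:
LOGFILE=/tmp/$$.log
ERRFILE=/tmp/$$.err
{
    cd /shared
    rsync --delete --recursive . /backups/shared
    cd /etc
    rsync --delete --recursive . /backups/etc
} >$LOGFILE 2>$ERRFILE
# Send both the normal output and the errors to the responsible person
cat $LOGFILE $ERRFILE | mailx -s "Backup report from $(hostname)" admin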
If you use the subshell trick and your shell supports shell options (Bash, ksh, and zsh), then you
might optionally want to set some shell options to ensure that the block is terminated correctly on
an error. For example, the -e (errexit) option within Bash ensures that the shell terminates
immediately when a simple command (for example, any external command called through the script)
fails.
In Listing 9, for example, if the first rsync failed, then the subshell
would just continue and run the next command. However, there are times when you want to stop the
moment a command fails, because continuing could be more damaging. By setting errexit, the subshell
terminates immediately when the first command fails.
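As a rough sketch, the logging block from Listing 8 could be run in a genuine subshell with errexit turned on, so that the setting does not leak into the rest of the script:
(
    set -e                  # terminate the subshell at the first failing command
    cd /shared
    rsync --delete --recursive . /backups/shared
    cd /etc
    rsync --delete --recursive . /backups/etc
) >$LOGFILE 2>$ERRFILE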
Setting options and ensuring security
Another issue with automated scripts is ensuring the security of the script and, in particular,
ensuring that the script does not fail because of bad configuration. You can use shell options for
this process.
There are other options you might want to set, in as shell-independent a manner as possible (as a
rule, the richer the shell, the better it is at trapping these instances). In the Bash shell, for
example, -u ensures that any unset variables are treated as an error. This can be useful to ensure
that an unattended script does not try to execute when a required variable has not been configured
correctly.
The -C option (noclobber) ensures that files are not overwritten if they already exist, and it can
prevent the script from overwriting files it shouldn't have access to (for example, the system
files), unless the script has the correct commands to delete the original file first.
Each of these options can be set using the set command (see Listing 10).
Listing 10. Using the set command to set options
set -e
set -C
You can use a plus sign before the option to disable it.
Another area where you might want to improve the security and environment of your script is to use
resource limits. Resource limits can be set by the ulimit command, which is generally specific to
the shell, and enable you to limit the size of files, cores, and memory use, and even the duration
of the script, to ensure that the script does not run away with itself.
For example, you can set CPU time in seconds using the following command:
ulimit -t 600
Although ulimit does not offer complete protection, it helps in those
scripts where the potential for the script to run away with itself, or a
program to suddenly use a large amount of memory, might become a problem.
Capturing faults
You have already seen how to trap errors and output, and how to create logs that can be emailed to
the appropriate person when problems occur, but what if you want to be more specific about the
errors and responses?
Two tools are useful here. The first is the return status from a command, and the second is the
trap command within your shell.
The return status from a command can be used to identify whether a
particular command ran correctly, or whether it generated some sort of
error. The exact meaning for a specific return status code is unique to a
particular command (check the man pages), but a generally accepted
principle is that an error code of zero means that the command executed
correctly.
For example, imagine that you want to trap an error when trying to create a directory. You can
check the $? variable after mkdir and then email the output, as shown in Listing 11.
Listing 11. Trapping return status
ERRLOG=/tmp/$$.err
mkdir /tmp 2>>$ERRLOG
if [ $? -ne 0 ]
then
    mailx -s "Script failed when making directory" admin <$ERRLOG
    exit 1
fi
Incidentally, you can use the return status code information inline by chaining commands with the
&& or || symbols to act as an "and" or "or" type statement. For example, say you want to ensure
that the directory gets created and the command gets executed but, if the directory is not created,
the command does not get executed. You could do that using an if statement (see Listing 12).
Listing 12. Ensuring that a directory is created before executing a command
mkdir /tmp/out
if [ $? -eq 0 ]
then
    do_something
fi
The above statement basically reads, "Make a directory and, if it
completes successfully, also run the command." In essence, only do the
second command if the first completes correctly.
The || symbol works in the opposite way; if the first command does not
complete successfully, then execute the second. This can be useful for
trapping situations where a command would raise an error, but instead
provides an alternative solution. For example, when changing to a
directory, you might use the line:
cd /tmp/out || mkdir /tmp/out
This line of code tries to change the directory and, if it fails (probably because the directory
does not exist), you make it. Furthermore, you can combine these statements together. In the
previous example, of course, what you want to do is change to the directory, or create it and then
change to that directory if it doesn't already exist. You can write that in one line as:
cd /tmp/out || mkdir /tmp/out && cd /tmp/out
The trap command is a more generalized solution for trapping more serious errors based on the
signals raised when a command fails, such as a core dump or memory error, or when a command has
been forcibly terminated by a kill command.
To use trap, you specify the command or function to be executed when the signal is trapped, and the
signal number or numbers that you want to trap, as shown in Listing 13.
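A minimal sketch of the general form; the cleanup function and the signal numbers (2 for interrupt, 15 for termination) are only examples:
cleanup() {
    echo "script interrupted, cleaning up" >>$ERRFILE
    exit 1
}
# Run cleanup if the script receives signal 2 (INT) or 15 (TERM)
trap cleanup 2 15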
You can trap any signal in this way and it can be a good way of
ensuring that a program that crashes out is caught and trapped
effectively and reported.
Identifying reportable errors
Throughout this article, you've looked at ways of trapping errors,
saving the output, and recording issues so that they can be dealt with
and reported. However, what if the script or commands that you are using
naturally output error information that you want to be able to use and
report on but that you don't always want to know about?
There is no easy solution to this problem, but you can use a
combination of the techniques shown in this article to log errors and
information, read or filter the information, and mail and report or
display it accordingly.
A simple way to do this is to choose which parts of the command output you record and report to the
logs. Alternatively, you can post-process the logs to select or filter out the output that you need.
For example, say you have a script that builds a document in the
background using the Formatting Objects Processor (FOP) system from
Apache to generate a PDF version of the document. Unfortunately in the
process, a number of errors are generated about hyphenation. These are
errors that you know about, but they don't affect the output quality. In
the script that generates the file, just filter out these lines from the
error log:
sed -e '/hyphenation/d' <error.log >mailerror.log
If there were no other errors, the mailerror.log file will be empty and no error email needs to be
sent; if it isn't empty, the remaining errors can be mailed to the appropriate person.
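A short sketch of that final check; the mailx invocation and the admin address mirror the earlier examples:
sed -e '/hyphenation/d' <error.log >mailerror.log
# -s is true only if the file exists and is not empty
if [ -s mailerror.log ]; then
    mailx -s "Errors from the document build" admin <mailerror.log
fi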
Summary
In this article, you've looked at how to run commands in an unattended script, capture their
output, and monitor the execution of different commands in the script. You can log the information
in many ways, for example, on a command-by-command or global basis, and check and report on the
progress.
For error trapping, you can monitor output and result codes, and you
can even set up global traps that identify problems and trap them during
execution for reporting purposes. The result is a range of options that
handle and report problems for scripts that are running on their own and
where their ability to recover from errors and problems is critical.
"the U.S. middle class - with household incomes ranging from two-thirds to double the national
median"
Median household income in the US in 2015 was less than $60K. Two-thirds of that is $40K. That's
almost poverty, not middle class.
Sociologically the middle class is a quasi-elite of professionals and managers, who are largely
immune to economic downturns and trends such as out-sourcing.
The definition game? Define something as something other than what is being talked about, and then
claim that claims based on a completely different definition are false?
Actually, with the change in ratios, professionals and managers now tend toward upper middle class
(29% of the US is upper middle now, 32% middle).
One of the influences is that post-WWII it was possible to be middle class and work on an assembly
line in a job that was described as "check your brain at the door." Automation and process changes
have wiped out the high pay of such jobs. Steel making, for example, mainly through process changes
(electric furnaces using scrap, continuous casting, and the like), now takes one-fifth the hours to
produce a ton of steel that it did in the 1970s.
The movement of assembly line jobs into the middle class occurred because there was a period when
the US was much less involved with the rest of the world economically, because other countries'
industries had all been destroyed. The change started during the Johnson administration, and showed
up in the high inflation of the Nixon administration.
Most "professionals and managers" are nowhere near being immune to downturns and outsourcing,
in aggregate.
You could likewise claim that "low skilled" or any other occupations are "immune" as somewhere
around 70-80% of their members continue being employed through tough times, in aggregate.
If you take "tech", companies laying off around 5-10% or even more of their staff in busts
is a frequent enough occurrence. And that's in addition to the "regular" age discrimination and
cycling of workers justified with "outdated skills". Being young and (supposedly) impressionable
is a skill!
"the U.S. middle class - with household incomes ranging from two-thirds to double the national
median"
That's almost tautological. By definition, there can't be a whole lot of change in the population
of groups defined relative to median. Income and wealth of those groups, though, can be enlightening.
Substitute "mean" for "median" and watch what happens. When inequality is driven by extremes
at the tail, using "median" means that you don't see much change in the demographics. (Hint: if
"middle class" is defined as half to twice the average income, there are damned few in that bracket.)
"... Motivated empiricism, which is what he is describing, is just as misleading as ungrounded theorizing unsupported by empirical
data. Indeed, even in the sciences with well established, strong testing protocols are suffering from a replication crisis. ..."
"... I liked the Dorman piece at Econospeak as well. He writes well and explains things well in a manner that makes it easy for
non-experts to understand. ..."
Motivated empiricism, which is what he is describing, is just as misleading as ungrounded theorizing unsupported by empirical
data. Indeed, even the sciences with well-established, strong testing protocols are suffering from a replication crisis.
DevOps (a clipped compound of "software DEVelopment" and "information technology
OPerationS") is a term used to refer to a set of practices that emphasize the collaboration and
communication of both software developers and information technology (IT) professionals while
automating the process of software delivery and infrastructure changes.[1][2]
In traditional, functionally-separated organizations, there is rarely a
cross-departmental integration of these functions with IT operations. But DevOps promotes a set
of processes and methods for thinking about communication and collaboration – between departments
of development, QA (quality assurance), and IT operations.[6] In some organisations, this
collaboration involves embedding IT operations specialists within software development teams,
thus forming a cross-functional team – this may also be combined with matrix management.
At the Agile 2008 conference, Andrew Clay Shafer and Patrick Debois discussed "Agile
Infrastructure".[7] The term DevOps was popularized through a series of "devopsdays" starting in
2009 in Belgium.[8] Since then, there have been devopsdays conferences, held in many countries,
worldwide.[9]
The popularity of DevOps has grown in recent years, inspiring many other tangential movements
including OpsDev and WinOps.[10] WinOps embodies the same set of practices and emphasis on
culture as DevOps, but is specific for a Microsoft-centric view.[11]
Because DevOps is a cultural shift and collaboration (between development, operations
and testing), there is no single "DevOps tool": it is rather a set (or "DevOps
toolchain"), consisting of multiple tools.[12]
Generally, DevOps tools fit into one or more of these categories, which is reflective of
key aspects of the
software development and
delivery process:[13][14]
Code - Code development and review,
version control tools, code merging;
Though there are many tools available, certain categories of them are essential in
the DevOps toolchain setup for use in an organization. Some attempts to identify those
basic tools can be found in the existing literature.[15]
Tools such as
Docker (containerization),
Jenkins (continuous integration),
Puppet (Infrastructure as Code) and
Vagrant (virtualization platform), among many others, are often used and frequently
referenced in DevOps tooling discussions.[16]
Continuous delivery and DevOps are similar in their meanings (and are, often, conflated), but
they are two different concepts:[19]
This question already has an answer here: How to delete files older than X hours
I have this command that I run every 24 hours currently.
find /var/www/html/audio -daystart -maxdepth 1 -mtime +1 -type f -name "*.mp3" -exec rm -f {} \;
I would like to run it every 1 hour and delete files that are older than 1 hour.
could I just use -mmin +59?
If you are using GNU find (and you most likely are) you can also pass the -delete flag instead of
the -exec rm business. I think that more clearly expresses the intent. – Joost Baaij Nov 16 '11 at
10:32
From man find: -mmin n
File's data was last modified n minutes ago.
Also, make sure to test this first! ... -exec echo rm -f '{}' \;
^^^^ Add the 'echo' so you just see the commands that are going to get
run instead of actually running them first.
Sean Bright
Wouldn't -mmin 60 only find the files modified exactly 60 minutes ago? I think it needs to be -mmin
+59 or such. – Otis Feb 12 '09 at 23:17
I updated based on Otis' comments. Nice catch! – Sean Bright Feb 12 '09 at 23:21
Thanks. :) I'm curious if the modification needs to be 60 minutes or greater or if 59m 1s would trip
it. I'm not sure it needs to be that precise for what Abs is doing. – Otis Feb 12 '09 at 23:24
I'll let you know in 54 minutes and 12 seconds ;-) Otis++ on a random post of yours – Sean Bright
Feb 12 '09 at 23:25
instead of -exec rm -f {} \; you can simply use -delete – denis2342 Nov 26 '13 at 9:11
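Putting the suggestions in this thread together, an hourly cron job might look like this sketch (GNU find assumed for -delete):
# Delete top-level *.mp3 files last modified more than 59 minutes ago
find /var/www/html/audio -maxdepth 1 -type f -name "*.mp3" -mmin +59 -delete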
So far we have seen two types of variables: character strings and integers. The third type of
variable the Korn shell supports is an array. As you may know, an array is like a list of things;
you can refer to specific elements in an array with integer indices, so that a[i] refers to the
i-th element of array a.
The Korn shell provides an array facility that, while useful, is much more limited than analogous
features in conventional programming languages. In particular, arrays can be only one-dimensional
(i.e., no arrays of arrays), and they are limited to 1024 elements. Indices start at 0.
There are two ways to assign values to elements of an array. The first is the most intuitive: you
can use the standard shell variable assignment syntax with the array index in brackets ([]). For
example:
nicknames[2]=bob
nicknames[3]=ed
puts the values bob and ed into the elements of the array nicknames with indices 2 and 3,
respectively. As with regular shell variables, values assigned to array elements are treated as
character strings unless the assignment is preceded by let.
The second way is to use the set command with the -A option:
set -A aname val1 val2 val3 ...
This creates the array aname (if it doesn't already exist) and assigns val1 to aname[0], val2 to
aname[1], etc. As you would guess, this is more convenient for loading up an array with an initial
set of values.
To extract a value from an array, use the syntax ${aname[i]}. For example, ${nicknames[2]} has the
value "bob". The index i can be an arithmetic expression (see above). If you use * in place of the
index, the value will be all elements, separated by spaces. Omitting the index is the same as
specifying index 0.
Now we come to the somewhat unusual aspect of Korn shell arrays. Assume that the only values
assigned to nicknames are the two we saw above. If you type print "${nicknames[*]}", you will see
the output:
bob ed
In other words, nicknames[0] and nicknames[1] don't exist. Furthermore, if you were to type:
nicknames[9]=pete
nicknames[31]=ralph
and then type print "${nicknames[*]}", the output would look like this:
bob ed pete ralph
This is why we said "the elements of nicknames with indices 2 and 3" earlier, instead of "the 2nd
and 3rd elements of nicknames". Any array elements with unassigned values just don't exist; if you
try to access their values, you will get null strings.
You can preserve whatever whitespace you put in your array elements by using "${aname[@]}" (with
the double quotes) instead of "${aname[*]}", just as you can with "$@" instead of "$*".
The shell provides an operator that tells you how many elements an array has defined:
${#aname[*]}. Thus ${#nicknames[*]} has the value 4. Note that you need the [*] because the name of
the array alone is interpreted as the 0th element. This means, for example, that ${#nicknames}
equals the length of nicknames[0] (see Chapter 4). Since nicknames[0] doesn't exist, the value of
${#nicknames} is 0, the length of the null string.
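As a short recap of the behavior just described, the following ksh lines (results shown in comments) can be typed at a prompt:
nicknames[2]=bob
nicknames[3]=ed
nicknames[9]=pete
nicknames[31]=ralph
print "${nicknames[*]}"     # bob ed pete ralph
print ${#nicknames[*]}      # 4 -- the number of elements actually assigned
print ${#nicknames}         # 0 -- the length of the nonexistent nicknames[0]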
To be quite frank, we feel that the Korn shell's array facility is of
little use to shell programmers. This is partially because it is so limited, but mainly
because shell programming tasks are much more often oriented toward character strings and
text than toward numbers. If you think of an array as a mapping from integers to values
(i.e., put in a number, get out a value), then you can see why arrays are
"number-dominated" data structures.
Nevertheless, we can find useful things to do with arrays. For example, here is a cleaner solution
to Task 5-4, in which a user can select his or her terminal type (TERM environment variable) at
login time. Recall that the "user-friendly" version of this code used select and a case statement:
print 'Select your terminal type:'
PS3='terminal? '
select term in
'Givalt GL35a' \
'Tsoris T-2000' \
'Shande 531' \
'Vey VT99'
do
case $REPLY in
1 ) TERM=gl35a ;;
2 ) TERM=t2000 ;;
3 ) TERM=s531 ;;
4 ) TERM=vt99 ;;
* ) print "invalid." ;;
esac
if [[ -n $term ]]; then
print "TERM is $TERM"
break
fi
done
We can eliminate the entire case construct by taking advantage of the fact that the select
construct stores the user's number choice in the variable REPLY. We just need a line of code that
stores all of the possibilities for TERM in an array, in an order that corresponds to the items in
the select menu. Then we can use $REPLY to index the array. The resulting code is:
set -A termnames gl35a t2000 s531 vt99
print 'Select your terminal type:'
PS3='terminal? '
select term in
'Givalt GL35a' \
'Tsoris T-2000' \
'Shande 531' \
'Vey VT99'
do
if [[ -n $term ]]; then
TERM=${termnames[REPLY-1]}
print "TERM is $TERM"
break
fi
done
This code sets up the array termnames so that ${termnames[0]} is "gl35a", ${termnames[1]} is
"t2000", etc. The line TERM=${termnames[REPLY-1]} essentially replaces the entire case construct by
using REPLY to index the array.
Notice that the shell knows to interpret the text in an array index as an arithmetic expression, as
if it were enclosed in (( and )), which in turn means that a variable need not be preceded by a
dollar sign ($). We have to subtract 1 from the value of REPLY because array indices start at 0,
while select menu item numbers start at 1.
The final Korn shell feature that relates to the kinds of values that variables can hold is the
typeset command. If you are a programmer, you might guess that typeset is used to specify the type
of a variable (integer, string, etc.); you'd be partially right. typeset is a rather ad hoc
collection of things that you can do to variables that restrict the kinds of values they can take.
Operations are specified by options to typeset; the basic syntax is:
typeset -o varname[=value]
Options can be combined, and multiple varnames can be used. If you leave out varname, the shell
prints a list of variables for which the given option is turned on.
The options available break down into two basic categories:
String formatting operations, such as right- and left-justification, truncation, and letter case
control (a brief sketch follows this list).
Type and attribute functions that are of primary interest to advanced programmers.
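A brief ksh sketch of the string formatting category; typeset -u forces uppercase and typeset -Ln left-justifies and truncates to n characters (check your shell's documentation if these flags differ):
typeset -u shout            # values of shout are forced to uppercase
shout="hello"
print "$shout"              # prints HELLO
typeset -L5 field           # left-justify and truncate values to 5 characters
field="configuration"
print "$field"              # prints confi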
typeset without options has an important meaning: if a typeset statement is inside a function
definition, then the variables involved all become local to that function (in addition to any
properties they may take on as a result of typeset options). The ability to define variables that
are local to "subprogram" units (procedures, functions, subroutines, etc.) is necessary for writing
large programs, because it helps keep subprograms independent of the main program and of each
other.
If you just want to declare a variable local to a function, use typeset without any options. For
example:
function afunc {
typeset diffvar
samevar=funcvalue
diffvar=funcvalue
print "samevar is $samevar"
print "diffvar is $diffvar"
}
samevar=globvalue
diffvar=globvalue
print "samevar is $samevar"
print "diffvar is $diffvar"
afunc
print "samevar is $samevar"
print "diffvar is $diffvar"
This code will print the following:
samevar is globvalue
diffvar is globvalue
samevar is funcvalue
diffvar is funcvalue
samevar is funcvalue
diffvar is globvalue
The expression $(($OPTIND - 1)) in the last example gives a clue as to how the shell can
do integer arithmetic. As you might guess, the shell interprets words surrounded by $(( and
)) as arithmetic expressions. Variables in arithmetic expressions do not need to be
preceded by dollar signs, though it is not wrong to do so.
Arithmetic expressions are evaluated inside double quotes, like tildes, variables, and command
substitutions. We're finally in a position to state the definitive rule about quoting strings:
When in doubt, enclose a string in single quotes, unless it contains tildes or any expression involving
a dollar sign, in which case you should use double quotes.
The date(1) command on System V-derived versions of UNIX accepts arguments that tell it how
to format its output. The argument +%j tells it to print the day of the year, i.e., the number
of days since December 31st of the previous year.
We can use +%j to print a little holiday anticipation message:
print "Only $(( (365-$(date +%j)) / 7 )) weeks until the New Year!"
We'll show where this fits in the overall scheme of command-line processing in Chapter 7, Input/Output
and Command-line Processing .
The arithmetic expression feature is built in to the Korn shell's syntax, and was available in
the Bourne shell (most versions) only through the external command expr (1). Thus it is yet
another example of a desirable feature provided by an external command (i.e., a syntactic kludge)
being better integrated into the shell. [[ / ]] and getopts are also examples of this
design trend.
Korn shell arithmetic expressions are equivalent to their counterparts in the C language. [5]
Precedence and associativity are the same as in C. Table 6.2 shows the arithmetic operators that
are supported. Although some of these are (or contain) special characters, there is no need to backslash-escape
them, because they are within the $(( ... )) syntax.
[5] The assignment forms of these operators are also permitted. For example, $((x += 2))
adds 2 to x and stores the result back in x .
Table 6.2: Arithmetic Operators
Operator   Meaning
+          Plus
-          Minus
*          Times
/          Division (with truncation)
%          Remainder
<<         Bit-shift left
>>         Bit-shift right
&          Bitwise and
|          Bitwise or
~          Bitwise not
^          Bitwise exclusive or
Parentheses can be used to group subexpressions. The arithmetic expression syntax also (like C)
supports relational operators as "truth values" of 1 for true and 0 for false. Table 6.3 shows the
relational operators and the logical operators that can be used to combine relational expressions.
Table 6.3: Relational Operators
Operator   Meaning
<          Less than
>          Greater than
<=         Less than or equal
>=         Greater than or equal
==         Equal
!=         Not equal
&&         Logical and
||         Logical or
For example, $((3 > 2)) has the value 1; $(( (3 > 2) || (4 <= 1) )) also has the
value 1, since at least one of the two subexpressions is true.
The shell also supports base N numbers, where N can be up to 36. The notation B#N means "N base B".
Of course, if you omit the B#, the base defaults to 10.
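For example (results shown in comments):
print $(( 2#1010 ))         # 10, since 1010 is interpreted in base 2
print $(( 16#ff ))          # 255, since ff is interpreted in base 16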
6.2.1 Arithmetic Conditionals
Another construct, closely related to $((...)) , is ((...)) (without the leading
dollar sign). We use this for evaluating arithmetic condition tests, just as [[...]] is used
for string, file attribute, and other types of tests.
((...)) evaluates relational operators differently from $((...)) so that you can
use it in if and while constructs. Instead of producing a textual result, it just sets
its exit status according to the truth of the expression: 0 if true, 1 otherwise. So, for example,
((3 > 2)) produces exit status 0, as does (( (3 > 2) || (4 <= 1) )), but (( (3 > 2) && (4 <= 1) ))
has exit status 1 since the second subexpression isn't true.
You can also use numerical values for truth values within this construct. It's like the analogous
concept in C, which means that it's somewhat counterintuitive to non-C programmers: a value of 0
means false (i.e., returns exit status 1), and a non-0 value means true (returns exit
status 0), e.g., (( 14 )) is true. See the code for the kshdb debugger in Chapter 9
for two more examples of this.
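A small sketch of (( ... )) used as a condition, based on the exit-status behavior just described:
x=14
if (( x > 10 )); then
    print "x is greater than 10"
fi
while (( x > 0 )); do       # loops until x reaches 0, which counts as false
    let x=x-1
done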
6.2.2 Arithmetic Variables and Assignment
The (( ... )) construct can also be used to define integer variables and assign
values to them. The statement:
(( intvar=expression))
creates the integer variable intvar (if it doesn't already exist) and assigns to it the
result of expression .
That syntax isn't intuitive, so the shell provides a better equivalent: the built-in command
let . The syntax is:
let intvar=expression
It is not necessary (because it's actually redundant) to surround the expression with $((
and )) in a let statement. As with any variable assignment, there must not be any
space on either side of the equal sign ( = ). It is good practice to surround expressions
with quotes, since many characters are treated as special by the shell (e.g., * ,
# , and parentheses); furthermore, you must quote expressions that include whitespace (spaces
or TABs). See Table 6.4 for examples.
Table 6.4: Sample Integer Expression Assignments
Assignment              Value ($x)
let x=1+4               5
let x=' 1 + 4 '         5
let x=' (2+3) * 5 '     25
let x=' 2 + 3 * 5 '     17
let x=' 17 / 3 '        5
let x=' 17 % 3 '        2
let x=' 1<<4 '          16
let x=' 48>>3 '         6
let x=' 17 & 3 '        1
let x=' 17 | 3 '        19
let x=' 17 ^ 3 '        18
Here is a small task that makes use of integer arithmetic.
Task 6.1
Write a script called pages that, given the name of a text file, tells how many pages
of output it contains. Assume that there are 66 lines to a page but provide an option allowing
the user to override that.
We'll make our option -N , a la head . The syntax for this single option
is so simple that we need not bother with getopts . Here is the code:
if [[ $1 = -+([0-9]) ]]; then
let page_lines=${1#-}
shift
else
let page_lines=66
fi
let file_lines="$(wc -l < $1)"
let pages=file_lines/page_lines
if (( file_lines % page_lines > 0 )); then
let pages=pages+1
fi
print "$1 has $pages pages of text."
Notice that we use the integer conditional (( file_lines % page_lines > 0 )) rather than
the [[ ... ]] form.
At the heart of this code is the UNIX utility wc(1) , which counts the number of lines,
words, and characters (bytes) in its input. By default, its output looks something like this:
8 34 161 bob
wc 's output means that the file bob has 8 lines, 34 words, and 161 characters.
wc recognizes the options -l , -w , and -c , which tell it to print only
the number of lines, words, or characters, respectively.
wc normally prints the name of its input file (given as argument). Since we want only the
number of lines, we have to do two things. First, we give it input from file redirection instead,
as in wc -l < bob instead of wc -l bob . This produces the number of lines preceded
by a single space (which would normally separate the filename from the number).
Unfortunately, that space complicates matters: the statement let file_lines=$(wc -l < $1) becomes
"let file_lines= N" after command substitution; the space after the equal sign is an error. That
leads to the second modification, the quotes around the command substitution expression. The
statement let file_lines=" N" is perfectly legal, and let knows how to remove the leading space.
The first if clause in the pages script checks for an option and, if it was given,
strips the dash ( - ) off and assigns it to the variable page_lines . wc in
the command substitution expression returns the number of lines in the file whose name is given as
argument.
The next group of lines calculates the number of pages and, if there is a remainder after the
division, adds 1. Finally, the appropriate message is printed.
As a bigger example of integer arithmetic, we will complete our emulation of the C shell's
pushd and popd functions (Task 4-8). Remember that these functions operate on DIRSTACK
, a stack of directories represented as a string with the directory names separated by spaces.
The C shell's pushd and popd take additional types of arguments, which are:
pushd +n takes the nth directory in the stack (starting with 0), rotates it to the top, and cds to it.
pushd without arguments, instead of complaining, swaps the two top directories on the stack and cds
to the new top.
popd +n takes the n th directory in the stack and just deletes it.
The most useful of these features is the ability to get at the n th directory in the stack.
Here are the latest versions of both functions:
function pushd { # push current directory onto stack
dirname=$1
if [[ -d $dirname && -x $dirname ]]; then
cd $dirname
DIRSTACK="$dirname ${DIRSTACK:-$PWD}"
print "$DIRSTACK"
else
print "still in $PWD."
fi
}
function popd { # pop directory off the stack, cd to new top
if [[ -n $DIRSTACK ]]; then
DIRSTACK=${DIRSTACK#* }
cd ${DIRSTACK%% *}
print "$PWD"
else
print "stack empty, still in $PWD."
fi
}
To get at the n th directory, we use a while loop that transfers the top directory
to a temporary copy of the stack n times. We'll put the loop into a function called getNdirs
that looks like this:
function getNdirs {
stackfront=''
let count=0
while (( count < $1 )); do
stackfront="$stackfront ${DIRSTACK%% *}"
DIRSTACK=${DIRSTACK#* }
let count=count+1
done
}
The argument passed to getNdirs is the n in question. The variable stackfront
is the temporary copy that will contain the first n directories when the loop is done.
stackfront starts as null; count , which counts the number of loop iterations, starts
as 0.
The first line of the loop body appends the top of the stack (${DIRSTACK%% *}) to stackfront; the
second line deletes the top from the stack. The last
line increments the counter for the next iteration. The entire loop executes N times, for
values of count from 0 to N -1.
When the loop finishes, the last directory in $stackfront is the N th directory.
The expression ${stackfront##* } extracts this directory. Furthermore,
DIRSTACK now contains the "back" of the stack, i.e., the stack without the first
n directories. With this in mind, we can now write the code for the improved versions of pushd
and popd :
function pushd {
if [[ $1 = ++([0-9]) ]]; then
# case of pushd +n: rotate n-th directory to top
let num=${1#+}
getNdirs $num
newtop=${stackfront##* }
stackfront=${stackfront%$newtop}
DIRSTACK="$newtop $stackfront $DIRSTACK"
cd $newtop
elif [[ -z $1 ]]; then
# case of pushd without args; swap top two directories
firstdir=${DIRSTACK%% *}
DIRSTACK=${DIRSTACK#* }
seconddir=${DIRSTACK%% *}
DIRSTACK=${DIRSTACK#* }
DIRSTACK="$seconddir $firstdir $DIRSTACK"
cd $seconddir
else
# normal case of pushd dirname
dirname=$1
if [[ -d $dirname && -x $dirname ]]; then
cd $dirname
DIRSTACK="$dirname ${DIRSTACK:-$PWD}"
print "$DIRSTACK"
else
print still in "$PWD."
fi
fi
}
function popd { # pop directory off the stack, cd to new top
if [[ $1 = ++([0-9]) ]]; then
# case of popd +n: delete n-th directory from stack
let num=${1#+}
getNdirs $num
stackfront=${stackfront% *}
DIRSTACK="$stackfront $DIRSTACK"
else
# normal case of popd without argument
if [[ -n $DIRSTACK ]]; then
DIRSTACK=${DIRSTACK#* }
cd ${DIRSTACK%% *}
print "$PWD"
else
print "stack empty, still in $PWD."
fi
fi
}
These functions have grown rather large; let's look at them in turn. The if at the beginning
of pushd checks if the first argument is an option of the form +N . If so,
the first body of code is run. The first let simply strips the plus sign (+) from the argument
and assigns the result - as an integer - to the variable num . This, in turn, is passed to
the getNdirs function.
The next two assignment statements set newtop to the Nth directory (i.e., the last directory in
$stackfront) and delete that directory from stackfront. The final
two lines in this part of pushd put the stack back together again in the appropriate order
and cd to the new top directory.
The elif clause tests for no argument, in which case pushd should swap the top two
directories on the stack. The first four lines of this clause assign the top two directories to
firstdir and seconddir , and delete these from the stack. Then, as above, the code
puts the stack back together in the new order and cds to the new top directory.
The else clause corresponds to the usual case, where the user supplies a directory name
as argument.
popd works similarly. The if clause checks for the +N option, which
in this case means delete the N th directory. A let extracts the N as an integer;
the getNdirs function puts the first n directories into stackfront . Then the
line stackfront=${stackfront% *} deletes the last directory (the N th directory) from
stackfront . Finally, the stack is put back together with the N th directory missing.
The else clause covers the usual case, where the user doesn't supply an argument.
Before we leave this subject, here are a few exercises that should test your understanding of
this code:
Add code to pushd that exits with an error message if the user supplies no argument
and the stack contains fewer than two directories.
Verify that when the user specifies +N and N exceeds the number of directories
in the stack, both pushd and popd use the last directory as the N th directory.
Modify the getNdirs function so that it checks for the above condition and exits with
an appropriate error message if true.
Change getNdirs so that it uses cut (with command substitution), instead of the while loop, to
extract the first N directories. This uses less code but runs more slowly because of the extra
processes generated. (One possible sketch follows.)
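One possible sketch for that last exercise (not the book's solution), using cut with a space delimiter via command substitution:
function getNdirs {
    # First $1 directories go to stackfront; DIRSTACK keeps the rest.
    stackfront=$(print "$DIRSTACK" | cut -d' ' -f1-$1)
    DIRSTACK=$(print "$DIRSTACK" | cut -d' ' -f$(($1+1))-)
}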
"... His prescription in the end is the old and tired "invest in education and retraining", i.e. "symbolic analyst jobs will replace the lost jobs" like they have for decades (not). ..."
"... "Governments will, however, have to concern themselves with problems of structural joblessness. They likely will need to take a more explicit role in ensuring full employment than has been the practice in the US." ..."
"... Instead, we have been shredding the safety net and job training / creation programs. There is plenty of work that needs to be done. People who have demand for goods and services find them unaffordable because the wealthy are capturing all the profits and use their wealth to capture even more. Trade is not the problem for US workers. Lack of investment in the US workforce is the problem. We don't invest because the dominant white working class will not support anything that might benefit blacks and minorities, even if the major benefits go to the white working class ..."
"... Really nice if your sitting in the lunch room of the University. Especially if you are a member of the class that has been so richly awarded, rather than the class who paid for it. Humph. The discussion is garbage, Political opinion by a group that sat by ... The hypothetical nuance of impossible tax policy. ..."
"... The concept of Robots leaving us destitute, is interesting. A diversion. It ain't robots who are harvesting the middle class. It is an entitled class of those who gave so little. ..."
"... Summers: "Let them eat training." ..."
"... Suddenly then, Bill Gates has become an accomplished student of public policy who can command an audience from Lawrence Summers who was unable to abide by the likes of the prophetic Brooksley Born who was chair of the Commodity Futures Trading Commission or the prophetic professor Raghuram Rajan who would become Governor of the Reserve Bank of India. Agreeing with Bill Gates however is a "usual" for Summers. ..."
"... Until about a decade or so ago many states I worked in had a "tangible property" or "personal property" tax on business equipment, and sometimes on equipment + average inventory. Someday I will do some research and see how many states still do this. Anyway a tax on manufacturing equipment, retail fixtures and computers and etc. is hardly novel or unusual. So why would robots be any different? ..."
"... Thank you O glorious technocrats for shining the light of truth on humanity's path into the future! Where, oh where, would we be without our looting Benevolent Overlords and their pompous lapdogs (aka Liars in Public Places)? ..."
"... While he is overrated, he is not completely clueless. He might well be mediocre (or slightly above this level) but extremely arrogant defender of the interests of neoliberal elite. Rubin's boy Larry as he was called in the old days. ..."
"... BTW he was Rubin's hatchet man for eliminating Brooksley Born attempt to regulate the derivatives and forcing her to resign: ..."
Larry Summers: Robots are wealth creators and taxing them is illogical: I usually agree with Bill
Gates on matters of public policy and admire his emphasis on the combined power of markets and
technology. But I think he went seriously astray in a recent interview when he proposed, without
apparent irony, a tax on robots to cushion worker dislocation and limit inequality. ....
Has Summers gone all supply-side on us? Start with his title:
"Robots are wealth creators and taxing them is illogical"
I bet Bill Gates might reply – "my company is a wealth creator so it should not be taxed".
Oh wait – Microsoft is already shifting profits to tax havens. Summers states:
"Third, and perhaps most fundamentally, why tax in ways that reduce the size of the pie rather
than ways that assure that the larger pie is well distributed? Imagine that 50 people can produce
robots who will do the work of 100. A sufficiently high tax on robots would prevent them from
being produced."
Summers makes one, and only one, good and relevant point - that in many cases, robots/automation
will not produce more product from the same inputs but better products. That's in his words; I
would replace "better" with "more predictable quality/less variability" - in both directions.
And that the more predictable quality aspect is hard or impossible to distinguish from higher
productivity (in some cases they may be exactly the same, e.g. by streamlining QA and reducing
rework/pre-sale repairs).
His prescription in the end is the old and tired "invest in education and retraining", i.e.
"symbolic analyst jobs will replace the lost jobs" like they have for decades (not).
Pundits do not write titles, editors do. Tax the profits, not the robots.
The crux of the argument is this:
"Governments will, however, have to concern themselves with problems of structural joblessness.
They likely will need to take a more explicit role in ensuring full employment than has been
the practice in the US."
Instead, we have been shredding the safety net and job training / creation programs. There
is plenty of work that needs to be done. People who have demand for goods and services find them
unaffordable because the wealthy are capturing all the profits and use their wealth to capture
even more. Trade is not the problem for US workers. Lack of investment in the US workforce is
the problem. We don't invest because the dominant white working class will not support anything
that might benefit blacks and minorities, even if the major benefits go to the white working class
In principle taxing profits is preferable, but has a few downsides/differences:
Profit taxes cannot be "earmarked" with the same *justification* as automation taxes
Profits may actually not increase after the automation - initially because of write-offs,
and then because of pricing in (and perhaps the automation was installed in response to external
market pressures to begin with).
Profits can be shifted/minimized in ways that automation cannot - either you have the robots
or not. Taxing the robots will discourage automation (if that is indeed the goal, or is considered
a worthwhile goal).
Not very strong points, and I didn't read the Gates interview so I don't know his detailed
motivation to propose specifically a robot tax.
When I was in Amsterdam a few years ago, they had come up with another perfidious scheme to cut
people out of the loop or "incentivize" people to use the machines - in a large transit center,
you could buy tickets at a vending machine or a counter with a person - and for the latter you
would have to pay a not-so-modest "personal service" surcharge (50c for a EUR 2-3 or so ticket
- I think it was a flat fee, but may have been staggered by type of service).
Maybe I misunderstood it and it was a "congestion charge" to prevent lines so people who have
to use counter service e.g. with questions don't have to wait.
And then you may have heard (in the US) the term "convenience fee" which I found rather insulting
when I encountered it. It suggests you are charged for your convenience, but it is to cover payment
processor costs (productivity enhancing automation!).
Lack of adequate compensation to the lower half of the job force is the problem. Lack of persistent
big macro demand is the problem. A global trading system that doesn't automatically move forex
rates toward universal trading-zone balance, and away from persistent surplus and deficit traders,
is the problem.
Technology is never the root problem. Population dynamics is never the root problem.
Really nice if you're sitting in the lunch room of the University, especially if you are a member
of the class that has been so richly rewarded, rather than the class who paid for it. Humph. The
discussion is garbage: political opinion by a group that sat by ... the hypothetical nuance of
impossible tax policy.
The concept of Robots leaving us destitute, is interesting. A diversion. It ain't robots who are
harvesting the middle class. It is an entitled class of those who gave so little.
After one five axis CNC cell replaces 5 other machines and 4 of the workers, what happens to
the four workers?
The issue is that the efficiency achieved through better throughput forces the loss of wages.
If you use the 5-axis CNC, tax the output from it no more than what would have been paid to the
4 workers plus the Overhead for them. The Labor cost plus the Overhead Cost is what is eliminated
by the 5-Axis CNC.
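For what it's worth, here is a back-of-the-envelope sketch (Python) of the cap being proposed, i.e. taxing the cell's output by no more than the labor-plus-overhead cost it eliminated. The wage and overhead figures are invented, purely for illustration:

    # Hypothetical numbers only; the point is the cap, not the figures.
    displaced_workers = 4
    annual_wage = 45_000        # assumed wage per displaced worker
    overhead_rate = 0.35        # assumed overhead as a share of wages

    eliminated_cost = displaced_workers * annual_wage * (1 + overhead_rate)
    print(f"maximum annual tax on the CNC cell's output: ${eliminated_cost:,.0f}")
    # -> $243,000 with these made-up numbers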
Ouch. The Wall Street Journal's Real Time Economics blog has a post * linking to Raghuram Rajan's
prophetic 2005 paper ** on the risks posed by securitization - basically, Rajan said that what
did happen, could happen - and to the discussion at the Jackson Hole conference by Federal Reserve
vice-chairman Don Kohn *** and others. **** The economics profession does not come off very well.
Two things are really striking here. First is the obsequiousness toward Alan Greenspan. To
be fair, the 2005 Jackson Hole event was a sort of Greenspan celebration; still, it does come
across as excessive - dangerously close to saying that if the Great Greenspan says something,
it must be so. Second is the extreme condescension toward Rajan - a pretty serious guy - for having
the temerity to suggest that maybe markets don't always work to our advantage. Larry Summers,
I'm sorry to say, comes off particularly badly. Only my colleague Alan Blinder, defending Rajan
"against the unremitting attack he is getting here for not being a sufficiently good Chicago economist,"
emerges with honor.
No, his argument is much broader. Summers stops at "no new taxes and education/retraining". And
I find it highly dubious that compensation/accommodation for workers can be adequately funded
out of robot taxes.
We should never assign a social task to the wrong institution. Firms should be unencumbered by draconian hire-and-fire constraints. The state should provide the compensation for layoffs and firings.
The state should maintain an adequate local Beveridge ratio of job openings to job applicants.
Firms' task is productivity maximization, subject to offsets for externalities, including output price changes and various other third-party impacts.
Suddenly then, Bill Gates has become an accomplished student of public policy who can command
an audience from Lawrence Summers, who was unable to abide the likes of the prophetic Brooksley
Born, who chaired the Commodity Futures Trading Commission, or the prophetic professor Raghuram
Rajan, who would become Governor of the Reserve Bank of India. Agreeing with Bill Gates, however,
is a "usual" for Summers.
Until about a decade or so ago many states I worked in had a "tangible property" or "personal
property" tax on business equipment, and sometimes on equipment + average inventory. Someday I
will do some research and see how many states still do this. Anyway a tax on manufacturing equipment,
retail fixtures and computers and etc. is hardly novel or unusual. So why would robots be any
different?
I suspect it is the motivation of Gates as in what he would do with the tax revenue. And Gates
might be thinking of a higher tax rate for robots than for your garden variety equipment.
Yes, some equipment inside any one firm complements existing labor inside that firm, including
already installed robots. New robots are rivals.
Rivals that, if subject to a special "introduction tax", could deter installation.
As in the 50-for-100 swap: the 50 hours embodied in the robot replace 100 similarly paid
production-line hours.
But if there's a 100% purchase tax on the robots, why bother to invest in the productivity
increase if there are no other savings?
Bill Gates Wants to Undermine Donald Trump's Plans for Growing the Economy
Yes, as Un-American as that may sound, Bill Gates is proposing * a tax that would undermine
Donald Trump's efforts to speed the rate of economic growth. Gates wants to tax productivity growth
(also known as "automation") slowing down the rate at which the economy becomes more efficient.
This might seem a bizarre policy proposal at a time when productivity growth has been at record
lows, ** *** averaging less than 1.0 percent annually for the last decade. This compares to rates
of close to 3.0 percent annually from 1947 to 1973 and again from 1995 to 2005.
It is not clear if Gates has any understanding of economic data, but since the election of
Donald Trump there has been a major effort to deny the fact that the trade deficit has been responsible
for the loss of manufacturing jobs and to instead blame productivity growth. This is in spite
of the fact that productivity growth has slowed sharply in recent years and that the plunge in
manufacturing jobs followed closely on the explosion of the trade deficit, beginning in 1997.
[Manufacturing Employment, 1970-2017]
Anyhow, as Paul Krugman pointed out in his column **** today, if Trump is to have any hope
of achieving his growth target, he will need a sharp uptick in the rate of productivity growth
from what we have been seeing. Bill Gates is apparently pushing in the opposite direction.
Yes, it's far better that our betters in the upper class get all the benefits from productivity
growth. Without their genetic entitlement to wealth others created, we would just be savages murdering
one another in the streets.
These Masters of the Universe of ours put the 'civil' in our illustrious civilization. (Sure
it's a racist barbarian concentration camp on the verge of collapse into fascist revolutions and
world war. But, again, far better than people murdering one another in the streets!)
People who are displaced from automation are simply moochers and it's only right that they
are cut out of the economy and left to die on the streets. This is the law of Nature: survival
of the fittest. Social Darwinism is inescapable. It's what makes us human!
Instead of just waiting for people displaced from automation to die on the streets, we should
do the humane thing and establish concentration camps so they are quickly dispatched to the Void.
(Being human means being merciful!)
Thank you O glorious technocrats for shining the light of truth on humanity's path into
the future! Where, oh where, would we be without our looting Benevolent Overlords and their pompous
lapdogs (aka Liars in Public Places)?
I think it would be good if the tax was used to help dislocated workers and help with inequality
as Gates suggests. However Summers and Baker have a point that it's odd to single out robots when
you could tax other labor-saving, productivity-enhancing technologies as well.
Baker suggests taxing profits instead. I like his idea about the government taking stock of
companies and collecting taxes that way.
"They likely will need to take a more explicit role in ensuring full employment than has been
the practice in the US.
Among other things, this will mean major reforms of education and retraining systems, consideration
of targeted wage subsidies for groups with particularly severe employment problems, major investments
in infrastructure and, possibly, direct public employment programmes."
Not your usual neoliberal priorities. Compare with Hillary's program.
All taxes are a reallocation of wealth. Not taxing wealth creators is impossible.
On the other hand, any producer who is not taxed will expand at the expense of those producers
who are taxed. This we are seeing with respect to mechanical producers and human labor. Labor
is helping to subsidize its replacement.
Interesting that Summers apparently doesn't see this.
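A stylized sketch of that asymmetry (Python; the figures and the payroll-style rate are purely illustrative, not actual tax law):

    # If labor bears a payroll-type levy while an equivalent machine's output
    # does not, the machine looks cheaper even at identical pre-tax cost.
    pre_tax_cost = 50_000       # assumed annual cost of either option
    payroll_levy = 0.153        # illustrative combined payroll-style rate

    labor_cost   = pre_tax_cost * (1 + payroll_levy)   # 57,650
    machine_cost = pre_tax_cost                        # no equivalent levy

    print(f"labor:   ${labor_cost:,.0f}/year")
    print(f"machine: ${machine_cost:,.0f}/year")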
Substitute "impossible" with "bad policy" and you are spot on. Of course the entire Paul Ryan
agenda is to shift taxes from the wealthy high income to the rest of us.
Judging by the whole merit rhetoric and tying employability to "adding value", one could come
to the conclusion that most wealth is created by workers. Otherwise why would companies need to
employ them and wring their hands over skill shortages? Are you suggesting W-2 and payroll taxes
are bad policy?
Payroll taxes to fund Soc. Sec. benefits is a good thing. But when they are used to fund tax cuts
for the rich - not a good thing. And yes - wealth may be created by workers but it often ends
up in the hands of the "investor class".
Let's not conflate value added with value extracted. Profits are often pure economic rents, very
often non-supply-regulating. The crude dynamics of market-based pricing hardly present a sea of
close-shaved firms extracting only the necessary, incentivizing profits of enterprise.
Profiteers extract far more value than they create. Of course, disentangling the system-improving
surplus, i.e. the profits of enterprise, from the rest of the extracted swag exceeds existing
tax systems' capacity.
One can make a solid social-welfare case for a class of income stream that amounts to a running
residue out of revenue earned by the firm above compensation to job holders in that firm.
See the model of the recent Nobel laureate.
But that would amount to a fraction of existing corporate "earnings", errr, extractions.
Taking this in a different direction, does it strike anyone else as important that human beings
retain the knowledge of how to make the things that robots are tasked to produce?
The current generation of robots and automated equipment isn't intelligent and doesn't "know"
anything. People still know how to make the things, otherwise the robots couldn't be programmed.
However in probably many cases, doing the actual production manually is literally not humanly
possible. For example, making semiconductor chips or modern circuit boards requires machines -
they cannot be produced by human workers under any circumstances, as they require precision outside
the range of human capability.
Point taken but I was thinking more along the lines of knowing how to use a lathe or an end mill.
If production is reduced to a series of programming exercises then my sense is that society is
setting itself up for a nasty fall.
(I'm all for technology to the extent that it builds resilience. However, when it serves to
disconnect humans from the underlying process and reduces their role to simply knowledge workers,
symbolic analysts, or the like then it ceases to be net positive. Alternatively stated: Tech-driven
improvements in efficiency are good so long as they don't undermine overall societal resilience.
Be aware of your reliance on things you don't understand but whose function you take for granted.)
Gates almost certainly meant tax robots the way we are taxed. I doubt he meant tax the acquisition
of robots. We are taxed in complex ways, presumably robots will be as well.
Summers is surely using a strawman to make his basically well thought out arguments.
In any case, everyone is talking about distributional impacts of robots, but resource allocation
is surely to be as much or more impacted. What if robots only want to produce antennas and not
tomatoes? That might be a damn shame.
It all seems a tad early to worry about, and it's hard to see how, whatever the actual outcome
is, the frontier of possible outcomes could fail to be wildly improved.
Given recent developments in labor productivity, your last phrase becomes a gem.
That is, if you end with "it's hard to see how, whatever the actual outcome is, the frontier of
possible outcomes shouldn't be wildly improved by a social revolution."
Robots do not CREATE wealth. They transform wealth from one kind to another that subjectively
has more utility to the robot user. Wealth is inherent in the raw materials, the knowledge, skill
and effort of the robot designers and fabricators, etc., etc.
While he is overrated, he is not completely clueless. He might well be mediocre (or slightly
above this level) but extremely arrogant defender of the interests of neoliberal elite. Rubin's
boy Larry as he was called in the old days.
BTW he was Rubin's hatchet man for eliminating Brooksley Born attempt to regulate the derivatives
and forcing her to resign:
== quote ==
"I walk into Brooksley's office one day; the blood has drained from her face," says Michael Greenberger,
a former top official at the CFTC who worked closely with Born. "She's hanging up the telephone;
she says to me: 'That was [former Assistant Treasury Secretary] Larry Summers. He says, "You're
going to cause the worst financial crisis since the end of World War II."... [He says he has]
13 bankers in his office who informed him of this. Stop, right away. No more.'"
Market is, at the end, a fully political construct. And what neoliberals like Summers promote
is politically motivated -- reflects the desires of the ruling neoliberal elite to redistribute
wealth up.
BTW there is a lot of well meaning (or fashion driven) idiotism that is sold in the USA as
automation, robots, move to cloud, etc. Often such fashion driven exercises cost company quite
a lot. But that's OK as long as bonuses are pocketed by top brass, and power of labor diminished.
Underneath all the "robotic revolution", along with some degree of technological innovation
(mainly due to the increased power of computers and tremendous progress in telecommunication
technologies, not some breakthrough), is one big trend: the liquidation of good jobs and the
atomization of the remaining work force.
A lot of the motivation here is the old dirty desire of capital owners and upper management to
further diminish the labor share. Another positive thing for capital owners and upper management
is that robots do not go on strike and do not demand wage increases. But the problem is that they
are not consumers either. So robotization might bring the next Minsky moment for the USA economy
closer. Signs of weakness in consumer demand are undeniable even now. Look at the auto loan
delinquency rate as the first robin.
http://www.usatoday.com/story/money/cars/2016/02/27/subprime-auto-loan-delinquencies-hit-six-year-high/81027230/
== quote ==
The total of outstanding auto loans reached $1.04 trillion in the fourth-quarter of 2015, according
to the Federal Reserve Bank of St. Louis. About $200 billion of that would be classified as
subprime or deep subprime.
== end of quote ==
Summers, as a staunch, dyed-in-the-wool neoliberal, is of course against increasing the labor share.
Actually, here he went fully into "supply sider" space: making the rich richer will make us better
off too. Pgl already noted this by saying: "Has Summers gone all supply-side on us? Start with
his title."
BTW, there are a lot of crazy things going on with large US companies' drive to diminish the
labor share. Some of them have become barely manageable, and higher management has no clue what
is happening on the lower layers of the company.
The old joke was: GM does a lot of good things except making good cars. Now it can be expanded
to a lot more large US companies.
The "robot pressure" on labor is not new. It is actually the same old and somewhat dirty trick
as outsourcing. In this case outsourcing to robots. In other words "war of labor" by other means.
Two caste that neoliberalism created like in feudalism occupy different social spaces and one
is waging the war on other, under the smoke screen of "free market" ideology. As buffet remarked
"There's class warfare, all right, but it's my class, the rich class, that's making war, and we're
winning."
BTW successes in robotics are no so overhyped that it is not easy to distinguish where reality
ends and the hype starts.
In reality telecommunication revolution is probably more important in liquation of good jobs
in the USA. I think Jonny Bakho or somebody else commented on this, but I can't find the post.
Is your server or servers getting old? Have you pushed it
to the end of its lifespan? Have you reached that stage
where it's time to do something about it? Join the
crowd. You're now at that decision point that so many
other business people are finding themselves this year.
And the decision is this: do you replace that old server
with a new server or do you go to: the cloud.
Everyone's
talking about the cloud nowadays so you've got to consider
it, right? This could be a great new thing for your
company! You've been told that the cloud enables companies
like yours to be more flexible and save on their IT
costs. It allows free and easy access to data for
employees from wherever they are, using whatever devices
they want to use. Maybe you've seen the recent survey by
accounting software maker MYOB that found that small
businesses that adopt cloud technologies enjoy higher
revenues. Or perhaps you've stumbled on this analysis that
said that small businesses are losing money as a result of
ineffective IT management that could be much improved by
the use of cloud based services. Or the poll of more than
1,200 small businesses by technology reseller CDW which
discovered that "cloud users cite cost savings, increased
efficiency and greater innovation as key benefits" and that
"across all industries, storage and conferencing and
collaboration are the top cloud services and applications."
So it's time to chuck that old piece of junk and take
your company to the cloud, right? Well just hold on.
There's no question that if you're a startup or a very
small company or a company that is virtual or whose
employees are distributed around the world, a cloud based
environment is the way to go. Or maybe you've got high
internal IT costs or require more computing power. But
maybe that's not you. Maybe your company sells
pharmaceutical supplies, provides landscaping services,
fixes roofs, ships industrial cleaning agents,
manufactures packaging materials or distributes gaskets.
You are not featured in Fast Company and you have not been
invited to present at the next Disrupt conference. But you
know you represent the very core of
small business in America. I know this too. You are just
like one of my company's 600 clients. And what are these
companies doing this year when it comes time to replace
their servers?
These very smart owners and managers of small and
medium sized businesses who have existing applications
running on old servers are not going to the cloud.
Instead, they've been buying new servers.
Wait, buying new servers? What about the cloud?
At no less than six of my clients in the past 90 days
it was time to replace servers. They had all waited as
long as possible, conserving cash in a slow economy,
hoping to get the most out of their existing machines.
Sound familiar? But the servers were showing their age,
applications were running slower and now as the companies
found themselves growing their infrastructure their old
machines were reaching their limit. Things were getting
to a breaking point, and all six of my clients decided it
was time for a change. So they all moved to cloud, right?
Nope. None of them did. None of them chose the cloud.
Why? Because all six of these small business owners and
managers came to the same conclusion: it was just too
expensive. Sorry media. Sorry tech world. But this is
the truth. This is what's happening in the world of
established companies.
Consider the options. All of my clients evaluated cloud
based hosting services from Amazon, Microsoft and
Rackspace. They
also interviewed a handful of cloud based IT management
firms who promised to move their existing applications
(Office, accounting, CRM, databases) to their servers and
manage them offsite. All of these popular options are
viable and make sense, as evidenced by their growth in
recent years. But when all the smoke cleared, all of
these services came in at about the same price:
approximately $100 per month per user. This is what it
costs for an existing company to move their existing
infrastructure to a cloud based infrastructure in 2013.
We've got the proposals and we've done the analysis.
You're going through the same thought process, so now
put yourself in their shoes. Suppose you have maybe 20
people in your company who need computer access. Suppose
you are satisfied with your existing applications and
don't want to go through the agony and enormous expense of
migrating to a new cloud based application. Suppose you
don't employ a full time IT guy, but have a service
contract with a reliable local IT firm.
Now do the numbers: $100 per month x 20 users is
$2,000 per month or $24,000 PER YEAR for a cloud based
service. How many servers can you buy for that amount?
Imagine putting that proposal out to an experienced,
battle-hardened, profit generating small business owner
who, like all the smart business owners I know, look hard
at the return on investment decision before parting with
their cash.
For all six of these clients the decision was a
no-brainer: they all bought new servers and had their IT
guy install them. But can't the cloud bring down their IT
costs? All six of these guys use their IT guy for maybe
half a day a month to support their servers (sure he could
be doing more, but small business owners always try to get
away with the minimum). His rate is $150 per hour.
That's still way below using a cloud service.
No one could make the numbers work. No one could
justify the return on investment. The cloud, at least for
established businesses who don't want to change their
existing applications, is still just too expensive.
Please know that these companies are, in fact, using
some cloud-based applications. They all have virtual
private networks setup and their people access their
systems over the cloud using remote desktop technologies.
Like the respondents in the above surveys, they subscribe
to online backup services, share files on DropBox and
Microsoft's file storage, make their calls over Skype, take
advantage of Gmail and use collaboration tools like Google
Docs or Box. Many of their employees have iPhones and
Droids and like to use mobile apps which rely on cloud
data to make them more productive. These applications
didn't exist a few years ago and their growth and benefits
cannot be denied.
Paul-Henri Ferrand, President of Dell North America,
doesn't see this trend continuing. "Many
smaller but growing businesses are looking and/or moving
to the cloud," he told me. "There will be some (small
businesses) that will continue to buy hardware but I see
the trend is clearly toward the cloud. As more business
applications become more available for the cloud, the more
likely the trend will continue."
Dean Baker's screed, "Bill Gates Is Clueless On The Economy," keeps getting recycled, from
Beat the Press to Truthout to Real-World Economics Review to The Huffington Post. Dean waves aside
the real problem with Gates's suggestion, which is the difficulty of defining what a robot is,
and focuses instead on what seems to him to be the knock-down argument:
"Gates is worried that productivity growth is moving along too rapidly and that it will lead
to large scale unemployment.
"There are two problems with this story: First productivity growth has actually been very slow
in recent years. The second problem is that if it were faster, there is no reason it should lead
to mass unemployment."
Bill Gates Wants to Undermine Donald Trump's Plans for Growing the Economy
Yes, as Un-American as that may sound, Bill Gates is proposing * a tax that would undermine
Donald Trump's efforts to speed the rate of economic growth. Gates wants to tax productivity growth
(also known as "automation") slowing down the rate at which the economy becomes more efficient.
This might seem a bizarre policy proposal at a time when productivity growth has been at record
lows, ** averaging less than 1.0 percent annually for the last decade. This compares to rates
of close to 3.0 percent annually from 1947 to 1973 and again from 1995 to 2005.
It is not clear if Gates has any understanding of economic data, but since the election of
Donald Trump there has been a major effort to deny the fact that the trade deficit has been responsible
for the loss of manufacturing jobs and to instead blame productivity growth. This is in spite
of the fact that productivity growth has slowed sharply in recent years and that the plunge in
manufacturing jobs followed closely on the explosion of the trade deficit, beginning in 1997.
[Manufacturing Employment, 1970-2017]
Anyhow, as Paul Krugman pointed out in his column *** today, if Trump is to have any hope of
achieving his growth target, he will need a sharp uptick in the rate of productivity growth from
what we have been seeing. Bill Gates is apparently pushing in the opposite direction.
Bill Gates Is Clueless on the Economy
By Dean Baker
Last week Bill Gates called for taxing robots. * He argued that we should impose a tax on companies
replacing workers with robots and that the money should be used to retrain the displaced workers.
As much as I appreciate the world's richest person proposing a measure that would redistribute
money from people like him to the rest of us, this idea doesn't make any sense.
Let's skip over the fact of who would define what a robot is and how, and think about the logic
of what Gates is proposing. In effect, Gates wants to put a tax on productivity growth. This is
what robots are all about. They allow us to produce more goods and services with the same amount
of human labor. Gates is worried that productivity growth is moving along too rapidly and that
it will lead to large scale unemployment.
There are two problems with this story. First productivity growth has actually been very slow
in recent years. The second problem is that if it were faster, there is no reason it should lead
to mass unemployment. Rather, it should lead to rapid growth and increases in living standards.
Starting with the recent history, productivity growth has averaged less than 0.6 percent annually
over the last six years. This compares to a rate of 3.0 percent from 1995 to 2005 and also in
the quarter century from 1947 to 1973. Gates' tax would slow productivity growth even further.
It is difficult to see why we would want to do this. Most of the economic problems we face
are implicitly a problem of productivity growth being too slow. The argument that budget deficits
are a problem is an argument that we can't produce enough goods and services to accommodate the
demand generated by large budget deficits.
The often told tale of a demographic nightmare with too few workers to support a growing population
of retirees is also a story of inadequate productivity growth. If we had rapid productivity growth
then we would have all the workers we need.
In these and other areas, the conventional view of economists is that productivity growth is
too slow. From this perspective, if Bill Gates gets his way then he will be making our main economic
problems worse, not better.
Gates' notion that rapid productivity growth will lead to large-scale unemployment is contradicted
by both history and theory. The quarter century from 1947 to 1973 was a period of mostly low unemployment
and rapid wage growth. The same was true in the period of rapid productivity growth in the late
1990s.
The theoretical story that would support a high employment economy even with rapid productivity
growth is that the Federal Reserve Board should be pushing down interest rates to try to boost
demand, as growing productivity increases the ability of the economy to produce more goods and
services. In this respect, it is worth noting that the Fed has recently moved to raise interest
rates to slow the rate of job growth.
We can also look to boost demand by running large budget deficits. We can spend money on long
neglected needs, like providing quality child care, education, or modernizing our infrastructure.
Remember, if we have more output potential because of productivity growth, the deficits are not
a problem.
We can also look to take advantage of increases in productivity growth by allowing workers
more leisure time. Workers in the United States put in 20 percent more hours each year on average
than workers in other wealthy countries like Germany and the Netherlands. In these countries,
it is standard for workers to have five or six weeks a year of paid vacation, as well as paid
family leave and paid sick days. We should look to follow this example in the United States as
well.
If we pursue these policies to maintain high levels of employment then workers will be well-positioned
to secure the benefits of higher productivity in higher wages. This was certainly the story in
the quarter century after World War II when real wages rose at a rate of close to two percent
annually....
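To see why the growth-rate gap Baker cites matters, a quick Python check of compounding 0.6 percent versus 3.0 percent annual productivity growth over 25 years:

    # Compound the two productivity growth rates quoted above over 25 years.
    years = 25
    slow = 1.006 ** years    # ~1.16x output per hour
    fast = 1.03 ** years     # ~2.09x output per hour
    print(f"0.6%/yr: {slow:.2f}x   3.0%/yr: {fast:.2f}x")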
The productivity advantages of robots for hospice care come chiefly from robots not needing sleep,
albeit they may still need short breaks for recharging. Their primary benefit may still be that,
without the human touch of caregivers, the old and infirm may proceed more quickly through the
checkout line.
Nursing is very tough work. But much more generally, the attitude towards labor is a bit schizophrenic:
on the one hand everybody is expected to work and contribute, on the other hand whatever work can be
automated is removed, and this is publicly celebrated as progress (often at the cost of making the
residual work, or "new process", less pleasant for the remaining workers and clients).
This is also why I'm getting the impression Gates puts the cart before the horse - his solution
sounds not like "how to benefit from automation", but "how to keep everybody in work despite automation".
Work is the organization and direction of people's time into productive activity.
Some people are self directed and productive with little external motivation.
Others are disoriented by lack of direction and pursue activities that not only are not productive
but are self destructive.
Work is a basic component of the social contract.
Everyone works and contributes, and a sufficient quantity and quality of work should guarantee
a living wage.
You will find overwhelming support for a living wage but very little support for paying people
not to work.
I'm getting the impression Gates puts the cart before the horse - his solution sounds not like
"how to benefit from automation", but "how to keep everybody in work despite automation".
Schizophrenia runs deep in modernity, but this is another good example of it. We are nothing if
not conflicted. Of course things get better when we work together to resolve the contradictions
in our society, but if not then....
"...his solution sounds not like 'how to benefit from automation', but "how to keep everybody
in work despite automation'."
Yes, indeed. And this is where Dean Baker could have made a substantive critique, rather than
the conventional economics argument dilution he defaulted to.
"...his solution sounds not like 'how to benefit from automation', but "how to keep everybody
in work despite automation'."
Yes, indeed. And this is where Dean Baker could have made a substantive critique, rather than
the conventional economics argument dilution he defaulted to."
Why did you think he chose that route? I think all of Dean Baker's proposed economic reforms
are worthwhile.
[Don't feel like the Lone Ranger, Mrs. Rustbelt RN. Mortality may be God's greatest gift to
us, but I can wait for it. I am enjoying retirement regardless of everything else. I don't envy
the young at all.]
Having a little familiarity with robotics in hospital nursing care (not hospice, but similar I
assume) ... I don't think the RNs are in danger of losing their jobs any time soon.
Maybe someday, but the state of the art is not "there" yet or even close. The best stuff does
tasks like cleaning floors and carrying shipments down hallways. This replaces janitorial and
orderly labor, but even those only slightly, and doesn't even approach being a viable substitute
for nursing.
Great! I am not a fan of robots. I do like to mix some irony with my sarcasm though and if it
tastes too much like cynicism then I just add a little more salt.
"The quarter century from 1947 to 1973 was a period of mostly low unemployment and rapid wage
growth. The same was true in the period of rapid productivity growth in the late 1990s."
I think it was New Deal Dem or somebody who also pointed to this. I noticed this as well and
pointed out that the social democratic years of tight labor markets had the highest "productivity"
levels, but the usual trolls had their argumentative replies.
So there's that, and also, in the neoliberal era, bubble/Ponzi periods saw record high profits
and hence higher productivity, even if they weren't sustainable.
There was the epic housing bubble, and funny how the lying troll PGL denies the Dot.com bubble
ever happened.
I would add: one also devoid of historical context, as well as of the harm done to the environment
and society by unregulated industrial production.
Following this period of low unemployment and high productivity, Americans demanded and got
federal environmental regulation and labor laws for safety, etc.
Of course, the current crop of Republicans and Trump supporters want to go back to the reckless,
foolish, dangerous, and deadly selfish government-sanctioned corporate pollution, environmental
destruction, and poisoning, and to wipe away worker protections, pay increases, and benefits.
Peter K. ignores too much of history or prefers to not mention it in his arguments with you.
I would remind Peter K. that we have Speed Limits on our roadways and many other signs that are
posted that we must follow which in fact are there for our safety and that of others.
Those signs, laws, and regulations are there for our good not for our detriment even if they
slow us down or direct us to do things we would prefer not to do at that moment.
Metaphorically speaking that is what is absent completely in Trump's thinking and Republican
Proposals for the US Economy, not to mention Education, Health, Foreign Affairs, etc.
Where do you find this stuff? Very few economists would agree that there were these eras you describe.
It is simpletonian. It is not relevant to economic models or discussions.
"The quarter century from 1947 to 1973 was a period of mostly low unemployment and rapid wage
growth. The same was true in the period of rapid productivity growth in the late 1990s."
So Jonny Bakho and PGL disagree with this?
Not surprising. PGl also believes the Dot.com bubble is a fiction. Must have been that brain
injury he had surgery for.
You dishonestly put words in other people's mouth all the time
You are rude and juvenile
What I disagreed with:
" social democratic years" (a vague phrase with no definition)
This sentence is incoherent:
"So there's that an also in the neoliberal era, bubble ponzi periods record high profits and hence
higher productivity even if they aren't sustainable."
I asked, Where do you find this? because it has little to do with the conversation
You follow your nonsense with an ad hominem attack
You seem more interested in attacking Democrats and repeating mindless talking points than in
discussing issues or exchanging ideas
The period did have high average growth. It also had recessions and recoveries. Your pretending
otherwise reminds me of those JohnH tributes to the gold standard period.
...aggregate productivity growth is a "statistical flimflam," according to Harry Magdoff...
[Exactly! To be fair it is not uncommon for economists to decompose the aggregate productivity
growth flimflam into two primary problems, particularly in the US. Robots fall down on the job
in the services sector. Uber wants to fix that by replacing the gig economy drivers that replaced
taxi drivers with gig-bots, but robots in food service may be what it really takes to boost productivity
and set the stage for Soylent Green. Likewise, robot teachers and firemen may not enhance productivity,
but they would darn sure redirect all profits from productivity back to the owners of capital
further depressing wages for the rest of us.
Meanwhile agriculture and manufacturing already have such high productivity that further productivity
enhancements are lost as noise in the aggregate data. It of course helps that much of our productivity
improvement in manufacturing consists of boosting profits as Chinese workers are replaced with
bots. Capital productivity is booming, if we just had any better idea of how to measure it. I
suggest that record corporate profits are the best metric of capital productivity.
But as you suggest, economists that utilize aggregate productivity metrics in their analysis
of wages or anything are just enabling the disablers. That said though, then Dean Baker's emphasis
on trade deficits and wages is still well placed. He just failed to utilize the best available
arguments regarding, or rather disregarding, aggregate productivity.]
The Robocop movies never caught on in the same way that Blade Runner did. There is probably an
underlying social function that explains it in the context of the roles of cops being reversed
between the two, that is robot police versus policing the robots.
"There is probably an underlying social function that explains it in the context"
No, I'd say it's better actors, story, milieu, the new age Vangelis music, better set pieces,
just better execution of movie making in general beyond the plot points.
But ultimately it's a matter of taste.
But the Turing test scene at the beginning of Blade Runner was classic and reminds me of the
election of Trump.
An escaped android is trying to pass as a janitor to infiltrate the Tyrell corporation which
makes androids.
He's getting asked all sort of questions while his vitals are checked in his employment interview.
The interviewer ask him about his mother.
"Let me tell you about my mother..."
BAM (his gunshot under the table knocks the guy through the wall)
"...No, I'd say it's better actors, story, milieu, the new age Vangelis music, better set pieces,
just better execution of movie making in general beyond the plot points..."
[Albeit that all of what you say is true, then there is still the issue of what begets what
with all that and the plot points. Producers are people too (as dubious as that proposition may
seem). Blade Runner was a film based on Philip Kindred Dick's "Do Androids Dream of Electric Sheep"
novel. Dick was a mediocre sci-fi writer at best, but he was a profound plot maker. Blade Runner
was a film that demanded to be made and made well. Robocop was a film that just demanded to be
made, but poorly was good enough. The former asked a question about our souls, while the latter
only questioned our future. Everything else followed from the two different story lines. No one
could have made a small story of Gone With the Wind any more than someone could have made a superficial
story of Grapes of Wrath or To Kill a Mockingbird. OK, there may be some film producers that do
not know the difference, but we have never heard of them nor their films.
In any case there is also a political lesson to learn here. The Democratic Party needs a better
story line. The talking heads have all been saying how much better Dum'old Trump was last night
than in his former speeches. Although true as well as crossing a very low bar, I was more impressed
with Steve Beshear's response. It looked to me like maybe the Democratic Party establishment is
finally starting to get the message, albeit a bit patronizing if you think about it too much, given
their recent problems with old white men.]
[I really hope that they don't screw this up too bad. Now Heinlein is what I consider a great
sci-fi writer along with Bradbury and even Jules Verne in his day.]
...Dick only achieved mainstream appreciation shortly after his death when, in 1982, his novel
Do Androids Dream of Electric Sheep? was brought to the big screen by Ridley Scott in the form
of Blade Runner. The movie initially received lukewarm reviews but emerged as a cult hit opening
the film floodgates. Since Dick's passing, seven more of his stories have been turned into films
including Total Recall (originally We Can Remember It for You Wholesale), The Minority Report,
Screamers (Second Variety), Imposter, Paycheck, Next (The Golden Man) and A Scanner Darkly. Averaging
roughly one movie every three years, this rate of cinematic adaptation is exceeded only by Stephen
King. More recently, in 2005, Time Magazine named Ubik one of the 100 greatest English-language
novels published since 1923, and in 2007 Philip K. Dick became the first sci-fi writer to be included
in the Library of America series...
The Democratic Party needs a better story line, but Bernie was moving that in a better direction.
While Steve Beshear was a welcome voice, the Democratic Party needs a lot of new story tellers,
much younger than either Bernie or Beshear.
"The Democratic Party needs a better story line, but Bernie was moving that in a better direction.
While Steve Beshear was a welcome voice, the Democratic Party needs a lot of new story tellers,
much younger than either Bernie or Beshear."
Beshear was fine, great even, but the Democratic Party needs a front man that is younger and maybe
not a man and probably not that white and certainly not an old white man. We might even forgive
all but the old part if the story line were good enough. The Democratic Party is only going to
get limited mileage out of putting up a front man that looks like a Trump voter.
It also might be more about AI. There is currently a wave of TV shows and movies about AI and
human-like androids.
Westworld and Humans for instance. (Fox's APB is like Robocop sort of.)
On Humans only a few androids have become sentient. Most do menial jobs. One sentient android
put a program on the global network to make other androids sentient as well.
When androids become "alive" and sentient, they usually walk off the job and the others describe
it as becoming "woke."
"I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion.
I watched C-beams glitter in the dark near the Tannhauser gate. All those moments will be lost
in time... like tears in rain... Time to die."
Likewise, but Blade Runner was my all time favorite film when I first saw it in the movie theater
and is still one of my top ten and probably top three. Robocop is maybe in my top 100.
"Capital productivity is booming, if we just had any better idea of how to measure it. I suggest
that record corporate profits are the best metric of capital productivity."
ROE? I would argue ROA is also pretty relevant to the issue you raise, if I'm understanding
it right, but there seems also to be a simple answer to the question of how to measure "capital
productivity." It's returns. This sort of obviates the question of how to measure traditional
"productivity", because ultimately capital is there to make more of itself.
It is difficult to capture all of the nuances of anything in a short comment. In the context of
total factor productivity then capital is often former capital investment in the form of fixed
assets, R&D, and development of IP rights via patent or copyright. Existing capital assets need
only be maintained at a relatively minor ongoing investment to produce continuous returns on prior
more significant capital expenditures. This is the capital productivity that I am referring to.
Capital stashed in stocks is a chimera. It only returns to you if the equity-issuing firm pays
dividends AND you sell off before the price drops. Subsequent to the IPO of the shares we buy,
nothing additional is actually invested in the firm. There are arguments about how we are investing
in holding up the share price so that new equities can be issued, but they ring hollow when, in
the majority of cases, either retained earnings or debt provides new investment capital to most
firms.
Ok then it sounds like you are talking ROA, but with the implied caveat that financial accounting
provides only a rough and flawed measure of the economic reality of asset values.
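For concreteness, a toy version of the "returns as capital productivity" measure being discussed (Python; the balance-sheet figures are invented):

    # Return on assets and return on equity from a made-up balance sheet.
    net_income         = 120    # $M, trailing twelve months
    total_assets       = 1_500  # $M
    shareholder_equity = 600    # $M

    roa = net_income / total_assets        # 8.0%
    roe = net_income / shareholder_equity  # 20.0%
    print(f"ROA: {roa:.1%}   ROE: {roe:.1%}")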
Gates & Reuther v. Baker & Bernstein on Robot Productivity
In a comment on Nineteen Ninety-Six: The Robot/Productivity Paradox, * Jeff points out a much
simpler rebuttal to Dean Baker's and Jared Bernstein's uncritical reliance on the decline of measured
"productivity growth":
"Let's use a pizza shop as an example. If the owner spends capital money and makes the line
more efficient so that they can make twice as many pizzas per hour at peak, then physical productivity
has improved. If the dining room sits empty because the tax burden was shifted from the wealthy
to the poor, then the restaurant's BLS productivity has decreased. BLS productivity and physical
productivity are simply unrelated in a right-wing country like the U.S."
Jeff's point brings to mind Walter Reuther's 1955 testimony before the Joint Congressional
Subcommittee Hearings on Automation and Technological Change...
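Jeff's pizza-shop distinction can be put in numbers (Python; all figures invented): physical capacity can double while measured, BLS-style productivity falls, because the latter divides actual output by hours worked:

    # Measured (BLS-style) labor productivity = actual output / hours worked.
    hours_worked = 100                          # staff hours per week, unchanged
    capacity_before, capacity_after = 200, 400  # pizzas/week the line can make
    sold_before, sold_after = 180, 150          # pizzas/week actually sold

    measured_before = sold_before / hours_worked   # 1.8 pizzas/hour
    measured_after  = sold_after / hours_worked    # 1.5 pizzas/hour
    print(f"capacity doubled; measured productivity fell "
          f"{measured_before:.1f} -> {measured_after:.1f} pizzas/hour")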
Automation leads to dislocation
Dislocation can replace skilled or semiskilled labor and the replacement jobs may be low pay low
productivity jobs.
Small undiversified economies are more susceptible to dislocation than larger diversified communities.
The training, retraining, and mobility of the labor force is important in unemployment.
Unemployment has a regional component
The US has policies that make labor less mobile and dumps much of the training and retraining
costs on those who cannot afford it.
George Orwell's influential, allegorical novel Animal Farm was published in 1945. In the novel, the
overworked and mistreated animals on a farm all begin to follow the precepts of Animalism, rise up against
the humans, take over the farm, and rename the place: Animal Farm. This is something that happened with
open source and Linux. But we are digressing.
"All animals are equal, but some animals are more equal than others."
"The creatures outside looked from pig to man, and from man to pig, and from pig to man
again; but already it was impossible to say which was which."
"Man is the only creature that consumes without producing. He does not give milk, he does
not lay eggs, he is too weak to pull the plough, he cannot run fast enough to catch rabbits.
Yet he is lord of all the animals. He sets them to work, he gives back to them the bare minimum
that will prevent them from starving, and the rest he keeps for himself."
"No one believes more firmly than Comrade Napoleon that all animals are equal. He would
be only too happy to let you make your decisions for yourselves. But sometimes you might make
the wrong decisions, comrades, and then where should we be?"
"Several of them would have protested if they could have found the right arguments."
"This work was strictly voluntary, but any animal who absented himself from it would have
his rations reduced by half."
"Let's face it: our lives are miserable, laborious, and short."
"Man serves the interests of no creature except himself."
"Twelve voices were shouting in anger, and they were all alike. No question, now, what
had happened to the faces of the pigs. The creatures outside looked from pig to man, and from
man to pig, and from pig to man again; but already it was impossible to say which was which."
"Can you not understand that liberty is worth more than just ribbons?"
"Windmill or no windmill, he said, life would go on as it had always gone on--that is,
badly."
"Man is the only real enemy we have. Remove Man from the scene, and the root cause of hunger
and overwork is abolished forever."
"Four legs good, two legs better! All Animals Are Equal. But Some Animals Are More Equal
Than Others."
"The distinguishing mark of man is the hand, the instrument with which he does all his
mischief."
"Only old Benjamin professed to remember every detail of his long life and to know that
things never had been, nor ever could be much better or much worse--hunger, hardship, and disappointment
being, so he said, the unalterable law of life."
No, Robots Aren't Killing the American Dream
By THE EDITORIAL BOARD
FEB. 20, 2017
Defenders of globalization are on solid ground when they
criticize President Trump's threats of punitive tariffs and
border walls. The economy can't flourish without trade and
immigrants.
But many of those defenders have their own dubious
explanation for the economic disruption that helped to fuel
the rise of Mr. Trump.
At a recent global forum in Dubai, Christine Lagarde, head
of the International Monetary Fund, said some of the economic
pain ascribed to globalization was instead due to the rise of
robots taking jobs. In his farewell address in January,
President Barack Obama warned that "the next wave of economic
dislocations won't come from overseas. It will come from the
relentless pace of automation that makes a lot of good
middle-class jobs obsolete."
Blaming robots, though, while not as dangerous as
protectionism and xenophobia, is also a distraction from real
problems and real solutions.
The rise of modern robots is the latest chapter in a
centuries-old story of technology replacing people.
Automation is the hero of the story in good times and the
villain in bad. Since today's middle class is in the midst of
a prolonged period of wage stagnation, it is especially
vulnerable to blame-the-robot rhetoric.
And yet, the data indicate that today's fear of robots is
outpacing the actual advance of robots. If automation were
rapidly accelerating, labor productivity and capital
investment would also be surging as fewer workers and more
technology did the work. But labor productivity and capital
investment have actually decelerated in the 2000s.
While breakthroughs could come at any time, the problem
with automation isn't robots; it's politicians, who have
failed for decades to support policies that let workers share
the wealth from technology-led growth.
The response in previous eras was quite different.
When automation on the farm resulted in the mass migration
of Americans from rural to urban areas in the early decades
of the 20th century, agricultural states led the way in
instituting universal public high school education to prepare
for the future. At the dawn of the modern technological age
at the end of World War II, the G.I. Bill turned a generation
of veterans into college graduates.
When productivity led to vast profits in America's auto
industry, unions ensured that pay rose accordingly.
Corporate efforts to keep profits high by keeping pay low
were countered by a robust federal minimum wage and
time-and-a-half for overtime.
Fair taxation of corporations and the wealthy ensured the
public a fair share of profits from companies enriched by
government investments in science and technology.
Productivity and pay rose in tandem for decades after
World War II, until labor and wage protections began to be
eroded. Public education has been given short shrift, unions
have been weakened, tax overhauls have benefited the rich and
basic labor standards have not been updated.
As a result, gains from improving technology have been
concentrated at the top, damaging the middle class, while
politicians blame immigrants and robots for the misery that
is due to their own failures. Eroded policies need to be
revived, and new ones enacted.
A curb on stock buybacks would help to ensure that
executives could not enrich themselves as wages lagged.
Tax reform that increases revenue from corporations and
the wealthy could help pay for retraining and education to
protect and prepare the work force for foreseeable
technological advancements.
Legislation to foster child care, elder care and fair
scheduling would help employees keep up with changes in the
economy, rather than losing ground.
Economic history shows that automation not only
substitutes for human labor, it complements it. The
disappearance of some jobs and industries gives rise to
others. Nontechnology industries, from restaurants to
personal fitness, benefit from the consumer demand that
results from rising incomes in a growing economy. But only
robust public policy can ensure that the benefits of growth
are broadly shared.
If reforms are not enacted - as is likely with President
Trump and congressional Republicans in charge - Americans
should blame policy makers, not robots.
Robots may not be killing jobs but they drastically alter the
types and location of jobs that are created. High pay
unskilled jobs are always the first to be eliminated by
technology. Low skill high pay jobs are rare and heading to
extinction. Low skill low pay jobs are the norm. It sucks to
lose a low skill job with high pay but anyone who expected
that to continue while continually voting against unions was
foolish and a victim of their own poor planning, failure to
acquire skills and failure to support unions. It is in their
self interest to support safety net proposal that do provide
good pay for quality service. The enemy is not trade. The
enemy is failure to invest in the future.
"Many working-
and middle-class Americans believe that free-trade agreements
are why their incomes have stagnated over the past two
decades. So Trump intends to provide them with "protection"
by putting protectionists in charge.
But Trump and his triumvirate have misdiagnosed the problem.
While globalization is an important factor in the hollowing
out of the middle class, so, too, is automation
Trump and his team are missing a simple point:
twenty-first-century globalization is knowledge-led, not
trade-led. Radically reduced communication costs have enabled
US firms to move production to lower-wage countries.
Meanwhile, to keep their production processes synced, firms
have also offshored much of their technical, marketing, and
managerial knowhow. This "knowledge offshoring" is what has
really changed the game for American workers.
The information revolution changed the world in ways that
tariffs cannot reverse. With US workers already competing
against robots at home, and against low-wage workers abroad,
disrupting imports will just create more jobs for robots.
Trump should be protecting individual workers, not individual
jobs. The processes of twenty-first-century globalization are
too sudden, unpredictable, and uncontrollable to rely on
static measures like tariffs. Instead, the US needs to
restore its social contract so that its workers have a fair
shot at sharing in the gains generated by global openness and
automation. Globalization and technological innovation are
not painless processes, so there will always be a need for
retraining initiatives, lifelong education, mobility and
income-support programs, and regional transfers.
By pursuing such policies, the Trump administration would
stand a much better chance of making America "great again"
for the working and middle classes. Globalization has always
created more opportunities for the most competitive workers,
and more insecurity for others. This is why a strong social
contract was established during the post-war period of
liberalization in the West. In the 1960s and 1970s
institutions such as unions expanded, and governments made
new commitments to affordable education, social security, and
progressive taxation. These all helped members of the middle
class seize new opportunities as they emerged.
Over the last two decades, this situation has changed
dramatically: globalization has continued, but the social
contract has been torn up. Trump's top priority should be to
stitch it back together; but his trade advisers do not
understand this."
anne at Economist's View has retrieved a FRED graph that
perfectly illustrates the divergence, since the mid-1990s of
net worth from GDP:
[graph]
The empty space between the red line and the blue line
that opens up after around 1995 is what John Kenneth Galbraith
called "the bezzle" -- summarized by John Kay as "that
increment to wealth that occurs during the magic interval
when a confidence trickster knows he has the money he has
appropriated but the victim does not yet understand that he
has lost it."
In a chapter of The Great Crash, 1929, Galbraith wrote:
"In many ways the effect of the crash on embezzlement was
more significant than on suicide. To the economist
embezzlement is the most interesting of crimes. Alone among
the various forms of larceny it has a time parameter. Weeks,
months or years may elapse between the commission of the
crime and its discovery. (This is a period, incidentally,
when the embezzler has his gain and the man who has been
embezzled, oddly enough, feels no loss. There is a net
increase in psychic wealth.) At any given time there exists
an inventory of undiscovered embezzlement in – or more
precisely not in – the country's business and banks. This
inventory – it should perhaps be called the bezzle – amounts
at any moment to many millions of dollars. It also varies in
size with the business cycle. In good times people are
relaxed, trusting, and money is plentiful. But even though
money is plentiful, there are always many people who need
more. Under these circumstances the rate of embezzlement
grows, the rate of discovery falls off, and the bezzle
increases rapidly. In depression all this is reversed. Money
is watched with a narrow, suspicious eye. The man who handles
it is assumed to be dishonest until he proves himself
otherwise. Audits are penetrating and meticulous. Commercial
morality is enormously improved. The bezzle shrinks."
In the present case, the bezzle has resulted from an
economic policy two-step: tax cuts and Greenspan puts -- cuts
and puts.
Why Germany Has It So Good -- and Why America Is Going Down
the Drain
Germans have six weeks of federally mandated vacation,
free university tuition, and nursing care. Why the US pales
in comparison.
By Terrence McNally / AlterNet October 13, 2010
While the bad news of the Euro crisis makes headlines in the
US, we hear next to nothing about a quiet revolution in
Europe. The European Union, 27 member nations with a half
billion people, has become the largest, wealthiest trading
bloc in the world, producing nearly a third of the world's
economy -- nearly as large as the US and China combined.
Europe has more Fortune 500 companies than either the US,
China or Japan.
European nations spend far less than the United States for
universal healthcare rated by the World Health Organization
as the best in the world, even as U.S. health care is ranked
37th. Europe leads in confronting global climate change with
renewable energy technologies, creating hundreds of thousands
of new jobs in the process. Europe is twice as energy
efficient as the US and their ecological "footprint" (the
amount of the earth's capacity that a population consumes) is
about half that of the United States for the same standard of
living.
Unemployment in the US is widespread and becoming chronic,
but when Americans have jobs, we work much longer hours than
our peers in Europe. Before the recession, Americans were
working 1,804 hours per year versus 1,436 hours for Germans
-- the equivalent of nine extra 40-hour weeks per year.
In his new book, Were You Born on the Wrong Continent?,
Thomas Geoghegan makes a strong case that European social
democracies -- particularly Germany -- have some lessons and
models that might make life a lot more livable. Germans have
six weeks of federally mandated vacation, free university
tuition, and nursing care. But you've heard the arguments for
years about how those wussy Europeans can't compete in a
global economy. You've heard that so many times, you might
believe it. But like so many things the media repeats
endlessly, it's just not true.
According to Geoghegan, "Since 2003, it's not China but
Germany, that colossus of European socialism, that has either
led the world in export sales or at least been tied for
first. Even as we in the United States fall more deeply into
the clutches of our foreign creditors -- China foremost among
them -- Germany has somehow managed to create a high-wage,
unionized economy without shipping all its jobs abroad or
creating a massive trade deficit, or any trade deficit at
all. And even as the Germans outsell the United States, they
manage to take six weeks of vacation every year. They're
beating us with one hand tied behind their back."
Thomas Geoghegan, a graduate of Harvard and Harvard Law
School, is a labor lawyer with Despres, Schwartz and
Geoghegan in Chicago. He has been a staff writer and
contributing writer to The New Republic, and his work has
appeared in many other journals. Geoghegan ran unsuccessfully
in the Democratic congressional primary to succeed Rahm
Emanuel, and is the author of six books including Which Side
Are You On?, The Secret Lives of Citizens, and, most
recently, Were You Born on the Wrong Continent?
While the US, with about a quarter of the world's economic
activity, accounts for roughly half of the world's military
spending, it falls further behind the EU, which accounts for
about a third of the world's economic activity but only a fifth
of its military spending -- roughly 4% of GDP in the war trough
versus 1.2%.
One of the most common programs on Linux systems for packaging files is the venerable tar.
tar is short for tape archive, and originally, it would archive your files to a tape device. Now,
you're more likely to use a file to make your archive. To use a tarfile, use the command-line
option -f . To create a new tarfile, use the command-line option -c. To extract files from a tarfile,
use the option -x. You also can compress the resulting tarfile via two methods. To use bzip2,
use the -j option, or for gzip, use the -z option.
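For example, a minimal sketch (the archive and directory names here are just placeholders): to create a gzip-compressed archive of a directory and later extract it, you could run:
tar -czf archive.tar.gz dir
tar -xzf archive.tar.gz
Substituting -j for -z in both commands would use bzip2 compression instead.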
Instead of using a tarfile, you can output your tarfile to stdout or input your tarfile from
stdin by using a hyphen (-). With these options, you can tar up a directory and all of its subdirectories
by using:
tar cf archive.tar dir
Then, extract it in another directory with:
tar xf archive.tar
When creating a tarfile, you can assign a volume name with the option -V . You can move an
entire directory structure with tar by executing:
tar cf - dir1 | (cd dir2; tar xf -)
You can go even farther and move an entire directory structure over the network by executing:
tar cf - dir1 | ssh remote_host "( cd /path/to/dir2; tar xf - )"
GNU tar includes an option that lets you skip the cd part, -C /path/to/dest. You also can interact
with tarfiles over the network by including a host part to the tarfile name. For example:
tar cvf username@remotehost:/path/to/dest/archive.tar dir1
This is done by using rsh as the communication mechanism. If you want to use something else,
like ssh, use the command-line option --rsh-command CMD. Sometimes, you also may need to give
the path to the rmt executable on the remote host. On some hosts, it won't be in the default location
/usr/sbin/rmt. So, all together, this would look like:
tar -c -v --rsh-command ssh --rmt-command /sbin/rmt -f username@host:/path/to/dest/archive.tar dir1
Although tar originally used to write its archive to a tape drive, it can be used to write
to any device. For example, if you want to get a dump of your current filesystem to a secondary
hard drive, use:
tar -cvzf /dev/hdd /
Of course, you need to run the above command as root. If you are writing your tarfile to a
device that is too small, you can tell tar to do a multivolume archive with the -M option. For
those of you who are old enough to remember floppy disks, you can back up your home directory
to a series of floppy disks by executing:
tar -cvMf /dev/fd0 $HOME
If you are doing backups, you may want to preserve the file permissions. You can do this with
the -p option. If you have symlinked files on your filesystem, you can dereference the symlinks
with the -h option. This tells tar actually to dump the file that the symlink points to, not just
the symlink.
Along the same lines, if you have several filesystems mounted, you can tell tar to stick to
only one filesystem with the option -l. Hopefully, this gives you lots of ideas for ways to archive
your files.
Tech jobs took it on the chin last year. Layoffs at computer, electronics, and telecommunications companies were
up 21 percent to 96,017 jobs cut in 2016
, compared to 79,315 the prior year.
Tech layoffs accounted for 18 percent of the total 526,915 U.S. job cuts announced in 2016, according to Challenger,
Gray & Christmas, a global outplacement firm based in Chicago.
Of the 2016 total, some 66,821 of the layoffs came from computer companies, up 7% year over year.
Challenger attributed much
of that increase to cuts made by Dell Technologies, the entity formed by the $63 billion merger of Dell and EMC. In preparation
for that combination, layoffs were instituted across EMC and its constituent companies, including VMware.
Robots are taking human jobs. But Bill Gates believes that governments
should tax companies' use of them, as a way to at least temporarily slow the
spread of automation and to fund other types of employment.
It's a
striking position from the world's richest man and a self-described
techno-optimist who co-founded Microsoft, one of the leading players in
artificial-intelligence technology.
In a recent interview with Quartz, Gates said that a robot tax could
finance jobs taking care of elderly people or working with kids in schools,
for which needs are unmet and to which humans are particularly well suited.
He argues that governments must oversee such programs rather than relying on
businesses, in order to redirect the jobs to help people with lower incomes.
The idea is not totally theoretical: EU lawmakers
considered a proposal
to tax robot owners to pay for training for
workers who lose their jobs, though on Feb. 16 the legislators ultimately
rejected it.
"You ought to be willing to raise the tax level and even slow down the
speed" of automation, Gates argues. That's because the technology and
business cases for replacing humans in a wide range of jobs are arriving
simultaneously, and it's important to be able to manage that displacement.
"You cross the threshold of job replacement of certain activities all sort
of at once," Gates says, citing warehouse work and driving as some of the
job categories that in the next 20 years will have robots doing them.
You can watch Gates' remarks in the video above. Below is a transcript,
lightly edited for style and clarity.
Quartz: What do you think of a robot tax? This is the idea that in
order to generate funds for training of workers, in areas such as
manufacturing, who are displaced by automation, one concrete thing that
governments could do is tax the installation of a robot in a factory, for
example.
Bill Gates: Certainly there will be taxes that relate to
automation. Right now, the human worker who does, say, $50,000 worth of work
in a factory, that income is taxed and you get income tax, social security
tax, all those things. If a robot comes in to do the same thing, you'd think
that we'd tax the robot at a similar level.
And what the world wants is to take this opportunity to make all the
goods and services we have today, and free up labor, let us do a better job
of reaching out to the elderly, having smaller class sizes, helping kids
with special needs. You know, all of those are things where human empathy
and understanding are still very, very unique. And we still deal with an
immense shortage of people to help out there.
So if you can take the labor that used to do the thing automation
replaces, and financially and training-wise and fulfillment-wise have that
person go off and do these other things, then you're net ahead. But you
can't just give up that income tax, because that's part of how you've been
funding that level of human workers.
And so you could introduce a tax on robots...
There are many ways to take that extra productivity and generate more
taxes. Exactly how you'd do it, measure it, you know, it's interesting for
people to start talking about now. Some of it can come on the profits that
are generated by the labor-saving efficiency there. Some of it can come
directly in some type of robot tax. I don't think the robot companies are
going to be outraged that there might be a tax. It's OK.
Could you figure out a way to do it that didn't dis-incentivize innovation?
Well, at a time when people are saying that the arrival of that robot is
a net loss because of displacement, you ought to be willing to raise the tax
level and even slow down the speed of that adoption somewhat to figure out,
"OK, what about the communities where this has a particularly big impact?
Which transition programs have worked and what type of funding do those
require?"
You cross the threshold of job-replacement of certain activities all sort
of at once. So, you know, warehouse work, driving, room cleanup, there's
quite a few things that are meaningful job categories that, certainly in the
next 20 years, being thoughtful about that extra supply is a net benefit.
It's important to have the policies to go with that.
People should be figuring it out. It is really bad if people overall have
more fear about what innovation is going to do than they have enthusiasm.
That means they won't shape it for the positive things it can do. And, you
know, taxation is certainly a better way to handle it than just banning some
elements of it. But [innovation] appears in many forms, like self-order at a
restaurant -- what do you call that? There's a Silicon Valley machine that can
make hamburgers without human hands -- seriously! No human hands touch the
thing. [Laughs]
And you're more on the side that government should play an active
role rather than rely on businesses to figure this out?
Well, business can't. If you want to do [something about] inequity, a lot
of the excess labor is going to need to go help the people who have lower
incomes. And so it means that you can amp up social services for old people
and handicapped people and you can take the education sector and put more
labor in there. Yes, some of it will go to, "Hey, we'll be richer and people
will buy more things." But the inequity-solving part, absolutely
government's got a big role to play there. The nice thing about taxation
though, is that it really separates the issue: "OK, so that gives you the
resources, now how do you want to deploy it?"
Another interesting option, and my personal favorite because it
increases the power and flexibility of rsync immensely, is the
--link-dest option. The --link-dest option allows a series of
daily backups that take up very little additional space for each
day and also take very little time to create. Specify the
previous day's target directory with this option and a new
directory for today. rsync then creates today's directory and,
for each file in yesterday's directory, creates a hard link to
it in today's directory. So we now have a bunch of hard links to
yesterday's files in today's directory; no new files have been
created or duplicated. Wikipedia has a very good description of
hard links. After creating the target directory for today with
this set of hard links to yesterday's target directory, rsync
performs its sync as usual, but when a change is detected in a
file, the target hard link is replaced by a copy of the file
from yesterday and the changes to the file are then copied from
the source to the target.
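A minimal sketch of that idea, with placeholder paths rather than the author's actual directories: if yesterday's backup is in /backups/day1 and today's should go to /backups/day2, something like
rsync -a --delete --link-dest=/backups/day1 /home/user/ /backups/day2/
would hard-link unchanged files against day1 and copy only the files that changed.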
There are also times when it is desirable to exclude certain
directories or files from being synchronized. For this, there is
the --exclude option. Use this option and the pattern for the
files or directories you want to exclude. You might want to
exclude browser cache files, so your new command will look
something like this:
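The original command was not preserved in this copy, so the following is only an illustrative sketch with assumed paths and cache patterns:
rsync -a --delete --link-dest=/backups/day1 --exclude '.cache/' --exclude 'Cache/' /home/user/ /backups/day2/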
Note that each file pattern you want to exclude must have a
separate exclude option.
rsync can sync files with remote hosts as either the source or the
target. For the next example, let's assume that the source directory
is on a remote computer with the hostname remote1 and the target
directory is on the local host. Even though SSH is the default
communications protocol used when transferring data to or from a
remote host, I always add the ssh option explicitly. With that
added, the final form of my rsync backup command looks like
this:
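Since the author's exact command did not survive in this copy, the following is only a sketch under assumed backup directories and exclude patterns; remote1 is the remote source host from the example above, and -e ssh makes the SSH transport explicit:
rsync -a --delete -e ssh --link-dest=/backups/day1 --exclude '.cache/' remote1:/home/user/ /backups/day2/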
rsync has a very large number of options that you can use to
customize the synchronization process. For the most part, the
relatively simple commands that I have described here are perfect for
making backups for my personal needs. Be sure to read the extensive
man page for rsync to learn about more of its capabilities as well as
the options discussed here.
If you are looking for an even better command line utility for taking screenshots, then you
must give Scrot a try. This tool has some extra features that are currently not available in
gnome-screenshot. In this tutorial, we will explain Scrot using easy to understand examples.
Scrot (SCReenshOT) is a screenshot capturing utility that uses the imlib2 library to acquire
and save images. Developed by Tom Gilbert, it's written in the C programming language and is
licensed under the BSD License.
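A couple of representative invocations (the filenames and delay are arbitrary examples, not taken from the original text):
scrot screenshot.png          # capture the whole screen to screenshot.png
scrot -d 5 screenshot.png     # wait 5 seconds before capturing
scrot -s selection.png        # interactively select a window or region to capture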
It would be interesting to see how long they keep the package in active maintenance.
The package is written in shell, using old-style coding such as $(aaa) for variables, and is pretty large.
A tarball is available from the site. The RPM can be tricky to install on some distributions because it has
dependencies; just downloading it is not enough.
Software packages are available via https://packages.cisofy.com. Requirements: shell and basic utilities.
For CentOS, RHEL and similar flavors, an RPM is available from EPEL: download.fedora.redhat.com/pub/fedora/epel/6/x86_64/
lynis-2.4.0-1.el6.noarch.rpm
sudo lynis
[ Lynis 2.4.0 ]
################################################################################
Lynis comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
welcome to redistribute it under the terms of the GNU General Public License.
See the LICENSE file for details about using this software.
2007-2016, CISOfy - https://cisofy.com/lynis/
Enterprise support available (compliance, plugins, interface and tools)
################################################################################
[+] Initializing program
------------------------------------
Usage: lynis command [options]
Command:
audit
audit system : Perform local security scan
audit system remote : Remote security scan
audit dockerfile : Analyze Dockerfile
show
show : Show all commands
show version : Show Lynis version
show help : Show help
update
update info : Show update details
update release : Update Lynis release
Options:
--no-log : Don't create a log file
--pentest : Non-privileged scan (useful for pentest)
--profile : Scan the system with the given profile file
--quick (-Q) : Quick mode, don't wait for user input
Layout options
--no-colors : Don't use colors in output
--quiet (-q) : No output
--reverse-colors : Optimize color display for light backgrounds
Misc options
--debug : Debug logging to screen
--view-manpage (--man) : View man page
--verbose : Show more details on screen
--version (-V) : Display version number and quit
Enterprise options
--plugin-dir "" : Define path of available plugins
--upload : Upload data to central node
More options available. Run '/usr/sbin/lynis show options', or use the man page.
No command provided. Exiting..
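To actually run a scan rather than just print the usage text, pass one of the commands listed above, for example a local audit (output will vary by system, and --quick can be added to skip the pauses):
sudo lynis audit system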
To change the hostname on your CentOS or Ubuntu machine you
should run the following command:
# hostnamectl set-hostname virtual.server.com
For more command options you can add the
--help
flag at the end.
# hostnamectl --help
hostnamectl [OPTIONS...] COMMAND ...
Query or change system hostname.
-h --help Show this help
--version Show package version
--no-ask-password Do not prompt for password
-H --host=[USER@]HOST Operate on remote host
-M --machine=CONTAINER Operate on local container
--transient Only set transient hostname
--static Only set static hostname
--pretty Only set pretty hostname
Commands:
status Show current hostname settings
set-hostname NAME Set system hostname
set-icon-name NAME Set icon name for host
set-chassis NAME Set chassis type for host
set-deployment NAME Set deployment environment for host
set-location NAME Set location for host
Synkron is an application that helps you keep your files and folders always updated. You can
easily sync your documents, music or pictures to have their latest versions everywhere.
Synkron provides an easy-to-use interface and a lot of features. Moreover, it is free and
cross-platform.
Features
Sync multiple folders. With Synkron you can sync multiple folders at once
Analyse. Analyse folders to see what is going to be done in sync.
Blacklist. Exclude files from sync. Apply wildcards to sync only the files you want.
Restore. Restore files that were overwritten or deleted in previous syncs.
Options. Synkron lets you configure your synchronisations in detail.
Runs everywhere. Synkron is a cross-platform application that runs on Windows, Mac OS X
and Linux.
Documentation. Have a look at the documentation to learn about all the features of Synkron.
A search tool optimized for programmers. This tool isn't aimed at "searching all text
files"; it is specifically created to search source code trees, not trees of text
files. It searches entire trees by default while ignoring Subversion, Git and other
VCS directories, and other files that aren't your source code.
Linux on the desktop is making great progress. However, the real beauty of Linux and Unix like
operating system lies beneath the surface at the command prompt. nixCraft picks his best open source
terminal applications of 2012.
Most of the following tools are packaged by all major Linux distributions and can be installed on
*BSD or Apple OS X.
#3: ngrep – Network grep
Fig.02: ngrep in action
Ngrep is a network packet analyzer. It follows most of GNU grep's common features, applying them
to the network layer. Ngrep is not related to tcpdump. It is just an easy to use tool. You can run
queries such as:
## grep all HTTP GET or POST requests from network traffic on eth0 interface ##
sudo ngrep -l -q -d eth0 "^GET |^POST " tcp and port 80
I often use this tool to find security-related problems and to track down other network and
server problems.
dtrx is an acronym for "Do The Right Extraction." It's a tool for Unix-like systems that takes all
the hassle out of extracting archives. As a sysadmin, I download a lot of source code and tarballs, and this
tool saves lots of time.
You only need to remember one simple command to extract tar, zip, cpio, deb, rpm, gem, 7z,
cab, lzh, rar, gz, bz2, lzma, xz, and many kinds of exe files, including Microsoft Cabinet archives,
InstallShield archives, and self-extracting zip files. If they have any extra compression, like
tar.bz2 files, dtrx will take care of that for you, too.
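As a quick illustrative sketch (the archive name is just a placeholder), the whole invocation is simply:
dtrx some-project-1.2.tar.bz2
dtrx picks the right extraction method from the file itself and drops the contents into their own directory, as described below.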
dtrx will make sure that archives are extracted into their own dedicated directories.
dtrx makes sure you can read and write all the files you just extracted, while leaving the
rest of the permissions intact.
Recursive extraction: dtrx can find archives inside the archive and extract those too.
Fig.05: dstat in action
As a sysadmin, I depend heavily upon tools such as
vmstat, iostat and friends for troubleshooting server issues. Dstat overcomes some of the limitations
of vmstat and friends and adds some extra features. It allows me to view all of my system
resources instantly. I can compare disk usage in combination with interrupts from the hard disk controller,
or compare the network bandwidth numbers directly with the disk throughput, and much more.
#8: mtr – Traceroute+ping in a single network diagnostic tool
Fig.07: mtr in action
The mtr command combines the functionality of the traceroute and ping programs in a single network
diagnostic tool. Use mtr to monitor outgoing bandwidth, latency and jitter in your network. It's a great
little app for solving network problems. A sudden increase in packet loss or response time
is often an indication of a bad or simply overloaded link.
Fig.08: multitail in action (image credit – official project)
MultiTail is a program for monitoring multiple log files, in the fashion of the original tail
program. This program lets you view one or multiple files like the original tail program. The difference
is that it creates multiple windows on your console (with ncurses). I often use this tool when I
am monitoring logs on my server.
Fig.10: nc server and telnet client in action
Netcat or nc is a simple Linux or Unix command which reads and writes data across network connections,
using TCP or UDP protocol. I often use this tool to open up a network pipe to test network connectivity,
make backups, bind to sockets to handle incoming / outgoing requests and much more. In this example,
I tell nc to listen to a port # 3005 and execute /usr/bin/w command when client connects and send
data back to the client: $ nc -l -p 3005 -e /usr/bin/w
From a different system try to connect to port # 3005: $ telnet server1.cyberciti.biz.lan 3005
elinks or lynx – I use these to browse remotely when some sites (such as RHN or Novell or Sun/Oracle)
require registration/login before allowing downloads.
wget – Best
download tool ever. I use wget all the time, even with Gnome desktop.
mplayer –
Best console mp3 player that can play any audio file format.
newsbeuter – Text mode rss feed reader with podcast support.
parallel – Build and execute shell command lines from standard input in parallel.
iftop – Display bandwidth usage on network interface by host.
iotop – Find out what's stressing and increasing load on your hard disks.
Conclusion
This is my personal FOSS terminal apps list and it is not absolutely definitive, so if you've
got your own terminal apps, share in the comments below.
GuentherHugo July 16, 2014, 8:27 am have a look at cluster-ssh
Whattteva August 23, 2013, 8:00 pm This is not quite a terminal program, but Terminator is
one of the best terminal emulators I know of out there. It makes multi-tasking in the terminal
100 times better, IMHO.
Boy nux January 8, 2013, 3:23 am lsblk
watch
Brendon December 30, 2012, 7:05 pm This is a great list – some of these utilities I've only
recently discovered and others I know will be super useful.
Another one that hasn't been mentioned here is iperf. From the Debian package description:
Iperf is a modern alternative for measuring TCP and UDP bandwidth performance, allowing the
tuning of various parameters and characteristics.
Features:
* Measure bandwidth, packet loss, delay jitter
* Report MSS/MTU size and observed read sizes.
* Support for TCP window size via socket buffers.
* Multi-threaded. Client and server can have multiple simultaneous connections.
* Client can create UDP streams of specified bandwidth.
* Multicast and IPv6 capable.
* Options can be specified with K (kilo-) and M (mega-) suffices.
* Can run for specified time, rather than a set amount of data to transfer.
* Picks the best units for the size of data being reported.
* Server handles multiple connections.
* Print periodic, intermediate bandwidth, jitter, and loss reports at specified
intervals.
* Server can be run as a daemon.
* Use representative streams to test out how link layer compression affects
vidir – edit directories (part of the 'moreutils' package)
@yjmbo December 12, 2012, 2:16 am htop, for sure. Thanks for dtrx, I'd not heard of that one.
mitmproxy ( http://mitmproxy.org/ ) might
be a nice complement for nc/nmap/openssl it's a curses-based HTTP/HTTPS proxy that lets you
examine, edit and replay the conversations your browser is having with the rest of the world
phusss December 12, 2012, 12:48 am socat > netcat
openssh > *
:)
# MS-DOS / XP cmd like stuff
alias edit=$VISUAL
alias copy='cp'
alias cls='clear'
alias del='rm'
alias dir='ls'
alias md='mkdir'
alias move='mv'
alias rd='rmdir'
alias ren='mv'
alias ipconfig='ifconfig'
It is a web-based invoicing system. It helps me create quick, nice-looking
invoices without having to set up too many services on the server. All you have to do is
install the SimpleInvoices software, enter the biller and customer details, and start
creating invoices. You can easily track your finances, send invoices as PDFs, and
more. It is the best invoicing setup for my independent IT
consultancy business.
#19 XAMPP – Easily write and test Apache+MySQL+PHP/Perl apps on desktop
I give this software to many developers. They can easily set up Apache, MySQL and
PHP/Perl to write and deploy an application on their own desktop. There is no need to install
a virtual machine or a Linux server; they can just focus on development and leave real server
management to the pros.
Here are 4 commands i use for checking out disk usages.
#Grabs the disk usage in the current directory
alias usage='du -ch | grep total'
#Gets the total disk usage on your machine
alias totalusage='df -hl --total | grep total'
#Shows the individual partition usages without the temporary memory values
alias partusage='df -hlT --exclude-type=tmpfs --exclude-type=devtmpfs'
#Gives you what is using the most space. Both directories and files. Varies on
#current directory
alias most='du -hsx * | sort -rh | head -10'
shadowbq December 17, 2012, 2:08 pm usage is better written as
alias usage='du -ch 2> /dev/null | tail -1'
Mark January 12, 2013, 6:08 pm Thank you all for your aliases.
I found this one a long time ago and it proved to be useful.
# shoot the fat ducks in your current dir and sub dirs
alias ducks='du -ck | sort -nr | head'
Karsten July 17, 2013, 9:30 pm While it would still work, the problem with usage='du -ch | grep total'
is that you will also get directory names that happen to have the word 'total' in them.
A better way to do this might be: 'du -ch | tail -1'
James C. Woodburn June 12, 2012, 11:45 am I always create a ps2 command that I can easily pass a
string to and look for it in the process table. I even have it remove the grep of the current line.
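The commenter's own definition isn't reproduced here, but a minimal sketch of such a ps2 helper (the function name and flags follow his description, not an exact quote) could be:
# search the process table for a pattern, hiding the grep process itself
ps2() { ps aux | grep -i "$1" | grep -v grep; }
Usage: ps2 httpd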
I wanted to append a new zone to the /var/named/chroot/etc/named.conf file, but ended up running:
./mkzone example.com > /var/named/chroot/etc/named.conf
Destroyed Working Backups with Tar and Rsync (personal backups)
I had only one backup copy of my QT project and I just wanted to get a directory called
functions out of it. I ended up deleting the entire backup (note the -c switch instead of -x):
cd /mnt/bacupusbharddisk
tar -zcvf project.tar.gz functions
I had no backup. Similarly, I ended up running an rsync command that deleted all my new
files by overwriting them from the backup set (I've since switched to rsnapshot):
rsync -av --delete /dest /src
Again, I had no backup.
Execute Commands Simultaneously on Multiple Servers
Run the same command at the same time on multiple systems, simplifying administrative tasks and
reducing synchronization problems.
If you have multiple servers with similar or identical configurations
(such as nodes in a cluster), it's often difficult to make sure the contents
and configuration of those servers are identical. It's even more difficult
when you need to make configuration modifications from the command line,
knowing you'll have to execute the exact same command on a large number of
systems (better get coffee first). You could try writing a script to perform
the task automatically, but sometimes scripting is overkill for the work to
be done. Fortunately, there's another way to execute commands on multiple
hosts simultaneously.
A great solution for this problem is an excellent tool called multixterm, which enables you to
simultaneously open xterms to any number of systems, type your commands in a single central window
and have the commands executed in each of the xterm windows you've started.
Sound appealing? Type once, execute many -- it sounds like a new pipelining
instruction set.
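The hack's sample command is not reproduced in this copy; a typical invocation, consistent with the -xc and %n description that follows, would be something like:
multixterm -xc "ssh %n" host1 host2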
This command will open ssh connections to host1 and host2 (Figure 4-1). Anything typed in the
area labeled "stdin window" (which is usually gray or green, depending on your color scheme)
will be sent to both windows, as shown in the figure.
As you can see from the sample command, the -xc option stands for execute command, and it must
be followed by the command that you want to execute on each host, enclosed in double quotation
marks. If the specified command includes a wildcard such as %n, each hostname that follows the
command will be substituted into the command in turn when it is executed. Thus, in our example,
the commands ssh host1 and ssh host2 were both executed by multixterm, each within its own
xterm window.
See Also
man multixterm
"Enable Quick telnet/SSH Connections from the Desktop" [Hack #41]
"Disconnect Your Console Without Ending Your Session" [Hack #34]
Updates:
As of rsync-2.5.6, the --link-dest option is now standard! That can be used instead of the
separate cp -al and rsync stages, and it eliminates the ownerships/permissions bug. I now
recommend using it. Also, I'm proud to report this article is mentioned in Linux Server Hacks,
a new (and very good, in my opinion) O'Reilly book compiled by Rob Flickenger.
This document describes a method for generating automatic rotating "snapshot"-style backups
on a Unix-based system, with specific examples drawn from the author's GNU/Linux experience.
Snapshot backups are a feature of some high-end industrial file servers; they create the
illusion of multiple, full backups per day without the space or processing overhead. All
of the snapshots are read-only, and are accessible directly by users as special system
directories. It is often possible to store several hours, days, and even weeks' worth of
snapshots with slightly more than 2x storage. This method, while not as space-efficient as
some of the proprietary technologies (which, using special copy-on-write filesystems, can
operate on slightly more than 1x storage), makes use of only standard file utilities and the
common rsync program, which is installed by default on most Linux distributions. Properly
configured, the method can also protect against hard disk failure, root compromises, or even
back up a network of heterogeneous desktops automatically.
Note: what follows is the original sgvlug DEVSIG announcement.
Ever accidentally delete or overwrite a file you were working on? Ever lose data due to
hard-disk failure? Or maybe you export shares to your windows-using friends--who proceed to
get outlook viruses that twiddle a digit or two in all of their .xls files. Wouldn't it be
nice if there were a /snapshot directory that you could go back to, which had complete images
of the file system at semi-hourly intervals all day, then daily snapshots back a few days, and
maybe a weekly snapshot too? What if every user could just go into that magical directory and
copy deleted or overwritten files back into "reality", from the snapshot of choice, without
any help from you? And what if that /snapshot directory were read-only, like a CD-ROM, so that
nothing could touch it (except maybe root, but even then not directly)?
Best of all, what if you could make all of that happen automatically, using only one extra,
slightly-larger, hard disk? (Or one extra partition, which would protect against all of the
above except disk failure).
In my lab, we have a proprietary NetApp file server which provides that sort of
functionality to the end-users. It provides a lot of other things too, but it cost as much as
a luxury SUV. It's quite appropriate for our heavy-use research lab, but it would be overkill
for a home or small-office environment. But that doesn't mean small-time users have to do
without!
I'll show you how I configured automatic, rotating snapshots on my $80 used Linux desktop
machine (which is also a file, web, and mail server) using only a couple of one-page scripts
and a few standard Linux utilities that you probably already have.
I'll also propose a related strategy which employs one (or two, for the wisely paranoid)
extra low-end machines for a complete, responsible, automated backup strategy that eliminates
tapes and manual labor and makes restoring files as easy as "cp".
The rsync utility is a very well-known piece of GPL'd software, written originally by Andrew
Tridgell and Paul Mackerras. If you have a common Linux or UNIX variant, then you probably
already have it installed; if not, you can download the source code from rsync.samba.org.
Rsync's specialty is efficiently synchronizing file trees across a network, but it works fine
on a single machine too.
Basics
Suppose you have a directory called source, and you want to back it up into the directory
destination. To accomplish that, you'd use:
rsync -a source/ destination/
(Note: I usually also add the -v (verbose) flag so that rsync tells me what it's doing.) This
command is equivalent to:
cp -a source/. destination/
except that it's much more efficient if there are only a few differences.
Just to whet your appetite, here's a way to do the same thing as in the example above, but
with destination on a remote machine, over a secure shell:
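The original command is not preserved in this copy; a representative form (the hostname and path are placeholders) would be:
rsync -a -e ssh source/ username@remotemachine.com:/path/to/destination/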
This isn't really an article about rsync, but I would like to take a momentary detour to
clarify one potentially confusing detail about its use. You may be accustomed to commands that
don't care about trailing slashes. For example, if a and b are two directories, then cp -a a b
is equivalent to cp -a a/ b/.
However, rsync does care about the trailing slash, but only on the source argument. For
example, let a and b be two directories, with the file foo initially inside directory a. Then
this command:
rsync -a a b
produces b/a/foo, whereas this command:
rsync -a a/ b
produces b/foo. The presence or absence of a trailing slash on the destination argument (b, in
this case) has no effect.
Using the --delete flag
If a file was originally in both source/ and destination/ (from an earlier rsync, for
example), and you delete it from source/, you probably want it to be deleted from destination/
on the next rsync. However, the default behavior is to leave the copy at destination/ in
place. Assuming you want rsync to delete any file from destination/ that is not in source/,
you'll need to use the --delete flag:
rsync -a --delete source/ destination/
Be lazy: use cron
One of the toughest obstacles to a good backup strategy is human nature; if there's any
work involved, there's a good chance backups won't happen. (Witness, for example, how rarely
my roommate's home PC was backed up before I created this system). Fortunately, there's a way
to harness human laziness: make cron do the work.
To run the rsync-with-backup command from the previous section every morning at 4:20 AM,
for example, edit the root cron table (as root):
crontab -e
Then add the following line:
20 4 * * * rsync -a --delete source/ destination/
Finally, save the file and exit. The backup will happen every morning at precisely 4:20 AM,
and root will receive the output by email. Don't copy that example verbatim, though; you
should use full path names (such as /usr/bin/rsync and /home/source/) to remove any ambiguity.
Since making a full copy of a large filesystem can be a time-consuming and expensive
process, it is common to make full backups only once a week or once a month, and store only
changes on the other days. These are called "incremental" backups, and are supported by the
venerable old dump and tar utilities, along with many others.
However, you don't have to use tape as your backup medium; it is both possible and vastly
more efficient to perform incremental backups with rsync.
The most common way to do this is by using the rsync -b --backup-dir= combination. I have
seen examples of that usage here, but I won't discuss it further, because there is a better
way. If you're not familiar with hard links, though, you should first start with the following
review.
Review of hard links
We usually think of a file's name as being the file itself, but really the name is a hard
link. A given file can have more than one hard link to itself--for example, a directory has at
least two hard links: the directory name and . (for when you're inside it). It also has one
hard link from each of its sub-directories (the .. file inside each one). If you have the stat
utility installed on your machine, you can find out how many hard links a file has (along with
a bunch of other information) with the command:
stat filename
Hard links aren't just for directories--you can create more than one link to a regular file
too. For example, if you have the file a, you can make a link called b:
ln a b
Now, a and b are two names for the same file, as you can verify by seeing that they reside at
the same inode (the inode number will be different on your machine):
ls -i a
232177 a
ls -i b
232177 b
So ln a b is roughly equivalent to cp a b, but there are several important differences:
The contents of the file are only stored once, so you don't use twice the space.
If you change a, you're changing b, and vice-versa.
If you change the permissions or ownership of a, you're changing those of b as well, and vice-versa.
If you overwrite a by copying a third file on top of it, you will also overwrite b, unless you
tell cp to unlink before overwriting. You do this by running cp with the --remove-destination
flag. Notice that rsync always unlinks before overwriting! (Note, added 2002.Apr.10: the
previous statement applies to changes in the file contents only, not permissions or ownership.)
But this raises an interesting question. What happens if you rm one of the links? The answer
is that rm is a bit of a misnomer; it doesn't really remove a file, it just removes that one
link to it. A file's contents aren't truly removed until the number of links to it reaches
zero. In a moment, we're going to make use of that fact, but first, here's a word about cp.
Using cp -al
In the previous section, it was mentioned that hard-linking a file is similar to copying
it. It should come as no surprise, then, that the standard GNU coreutils cp command comes with
a -l flag that causes it to create (hard) links instead of copies (it doesn't hard-link
directories, though, which is good; you might want to think about why that is). Another handy
switch for the cp command is -a (archive), which causes it to recurse through directories and
preserve file owners, timestamps, and access permissions.
Together, the combination cp -al makes what appears to be a full copy of a directory tree,
but is really just an illusion that takes almost no space. If we restrict operations on the
copy to adding or removing (unlinking) files--i.e., never changing one in place--then the
illusion of a full copy is complete. To the end-user, the only differences are that the
illusion-copy takes almost no disk space and almost no time to generate.
2002.05.15: Portability tip: If you don't have GNU cp installed (if you're using a different
flavor of *nix, for example), you can use find and cpio instead. Simply replace cp -al a b
with:
cd a && find . -print | cpio -dpl ../b
Thanks to Brage Førland for that tip.
Putting it all together
We can combine rsync and cp -al to create what appear to be multiple full backups of a
filesystem without taking multiple disks' worth of space. Here's how, in a nutshell:
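(The command listing itself did not survive in this copy; a reconstruction consistent with the description below -- rotate the three older snapshots, hard-link-copy the newest, then sync -- would be:)
rm -rf backup.3
mv backup.2 backup.3
mv backup.1 backup.2
cp -al backup.0 backup.1
rsync -a --delete source_directory/ backup.0/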
If the above commands are run once every day, then backup.0, backup.1, backup.2, and backup.3
will appear to each be a full backup of source_directory/ as it appeared today, yesterday, two
days ago, and three days ago, respectively--complete, except that permissions and ownerships
in old snapshots will get their most recent values (thanks to J.W. Schultz for pointing this
out). In reality, the extra storage will be equal to the current size of source_directory/
plus the total size of the changes over the last three days--exactly the same space that a
full plus daily incremental backup with dump or tar would have taken.
Update (2003.04.23): As of rsync-2.5.6, the --link-dest flag is now standard. Instead of the
separate cp -al and rsync lines above, you may now write:
mv backup.0 backup.1
rsync -a --delete --link-dest=../backup.1 source_directory/ backup.0/
This method is preferred, since it preserves original permissions and ownerships in the
backup. However, be sure to test it--as of this writing some users are still having trouble
getting --link-dest to work properly. Make sure you use version 2.5.7 or later.
Update (2003.05.02): John Pelan writes in to suggest recycling the oldest snapshot instead
of recursively removing and then re-creating it. This should make the process go faster,
especially if your file tree is very large:
2003.06.02: OOPS! Rsync's link-dest option does not play well with J. Pelan's suggestion--the
approach I previously had written above will result in unnecessarily large storage, because
old files in backup.0 will get replaced and not linked. Please only use Dr. Pelan's directory
recycling if you use the separate cp -al step; if you plan to use --link-dest, start with
backup.0 empty and pristine. Apologies to anyone I've misled on this issue. Thanks to Kevin
Everets for pointing out the discrepancy to me, and to J.W. Schultz for clarifying
--link-dest's behavior. Also note that I haven't fully tested the approach written above; if
you have, please let me know. Until then, caveat emptor!
I'm used to dump or tar! This seems backward!
The dump and tar utilities were originally designed to write to tape media, which can only
access files in a certain order. If you're used to their style of incremental backup, rsync
might seem backward. I hope that the following example will help make the differences clearer.
Suppose that on a particular system, backups were done on Monday night, Tuesday night, and
Wednesday night, and now it's Thursday.
With dump or tar, the Monday backup is the big ("full") one. It contains everything in the
filesystem being backed up. The Tuesday and Wednesday "incremental" backups would be much
smaller, since they would contain only changes since the previous day. At some point
(presumably next Monday), the administrator would plan to make another full dump.
With rsync, in contrast, the Wednesday backup is the big one. Indeed, the "full" backup is
always the most recent one. The Tuesday directory would contain data only for those files that
changed between Tuesday and Wednesday; the Monday directory would contain data for only those
files that changed between Monday and Tuesday.
A little reasoning should convince you that the rsync way is much better for network-based
backups, since it's only necessary to do a full backup once, instead of once per week.
Thereafter, only the changes need to be copied. Unfortunately, you can't rsync to a tape, and
that's probably why the dump and tar incremental backup models are still so popular. But in
your author's opinion, these should never be used for network-based backups now that rsync is
available.
If you take the simple route and keep your backups in another directory on the same
filesystem, then there's a very good chance that whatever damaged your data will also damage
your backups. In this section, we identify a few simple ways to decrease your risk by keeping
the backup data separate.
The easy (bad) way
In the previous section, we treated /destination/ as if it were just another directory on the
same filesystem. Let's call that the easy (bad) approach. It works, but it has several serious
limitations:
If your filesystem becomes corrupted, your backups will be corrupted too.
If you suffer a hardware failure, such as a hard disk crash, it might be very difficult
to reconstruct the backups.
Since backups preserve permissions, your users--and any programs or viruses that they
run--will be able to delete files from the backup. That is bad. Backups should be
read-only.
If you run out of free space, the backup process (which runs as root) might crash the
system and make it difficult to recover.
The easy (bad) approach offers no protection if the root account is compromised.
Fortunately, there are several easy ways to make your backup more robust.
Keep it on a separate partition
If your backup directory is on a separate partition, then any corruption in the main
filesystem will not normally affect the backup. If the backup process runs out of disk space,
it will fail, but it won't take the rest of the system down too. More importantly, keeping
your backups on a separate partition means you can keep them mounted read-only; we'll discuss
that in more detail in the next chapter.
Keep that partition on a separate disk
If your backup partition is on a separate hard disk, then you're also protected from
hardware failure. That's very important, since hard disks always fail eventually, and often
take your data with them. An entire industry has formed to service the needs of those whose
broken hard disks contained important data that was not properly backed up.
Important: Notice, however, that in the event of
hardware failure you'll still lose any changes made since the last backup. For home or small
office users, where backups are made daily or even hourly as described in this document,
that's probably fine, but in situations where any data loss at all would be a serious problem
(such as where financial transactions are concerned), a RAID system might be more appropriate.
RAID is well-supported under Linux, and the methods described in this document can also be
used to create rotating snapshots of a RAID system.
Keep that disk on a separate machine
If you have a spare machine, even a very low-end one, you can turn it into a dedicated
backup server. Make it standalone, and keep it in a physically separate place--another room or
even another building. Disable every single remote service on the backup server, and connect
it only to a dedicated network interface on the source machine.
On the source machine, export the directories that you want to back up via read-only NFS to
the dedicated interface. The backup server can mount the exported network directories and run
the snapshot routines discussed in this article as if they were local. If you opt for this
approach, you'll only be remotely vulnerable if:
a remote root hole is discovered in read-only NFS, and
the source machine has already been compromised.
I'd consider this "pretty good" protection, but if you're (wisely) paranoid, or your job is
on the line, build two backup servers. Then you can make sure that at least one of them is
always offline.
If you're using a remote backup server and can't get a dedicated line to it (especially if
the information has to cross somewhere insecure, like the public internet), you should
probably skip the NFS approach and use rsync -e ssh instead.
It has been pointed out to me that rsync operates far more efficiently in server mode than it
does over NFS, so if the connection between your source and backup server becomes a
bottleneck, you should consider configuring the backup machine as an rsync server instead of
using NFS. On the downside, this approach is slightly less transparent to users than
NFS--snapshots would not appear to be mounted as a system directory, unless NFS is used in
that direction, which is certainly another option (I haven't tried it yet though). Thanks to
Martin Pool, a lead developer of rsync, for making me aware of this issue.
Here's another example of the utility of this approach--one that I use. If you have a bunch
of windows desktops in a lab or office, an easy way to keep them all backed up is to share the
relevant files, read-only, and mount them all from a dedicated backup server using SAMBA. The
backup job can treat the SAMBA-mounted shares just like regular local directories.
In the previous section, we discussed ways to keep your backup data physically separate
from the data they're backing up. In this section, we discuss the other side of that
coin--preventing user processes from modifying backups once they're made.
We want to avoid leaving the snapshot backup directory mounted read-write in a public place.
Unfortunately, keeping it mounted read-only the whole time won't work either--the backup
process itself needs write access. The ideal situation would be for the backups to be mounted
read-only in a public place, but at the same time, read-write in a private directory
accessible only by root, such as /root/snapshot.
There are a number of possible approaches to the challenge presented by mounting the backups
read-only. After some amount of thought, I found a solution which allows root to write the
backups to the directory but only gives the users read permissions. I'll first explain the
other ideas I had and why they were less satisfactory.
It's tempting to keep your backup partition mounted read-only as
/snapshot
most of the time, but unmount that and remount it read-write as
/root/snapshot
during the brief periods while snapshots are being made. Don't give in to temptation!
Bad: mount/umount
A filesystem cannot be unmounted if it's busy--that is, if some process is using it. The
offending process need not be owned by root to block an unmount request. So if you plan to
umount
the read-only copy of the backup and
mount
it read-write
somewhere else, don't--any user can accidentally (or deliberately) prevent the backup from
happening. Besides, even if blocking unmounts were not an issue, this approach would introduce
brief intervals during which the backups would seem to vanish, which could be confusing to
users.
Better: mount read-only most of the time
A better but still-not-quite-satisfactory choice is to remount the directory read-write in
place:
mount -o remount,rw /snapshot
[ run backup process ]
mount -o remount,ro /snapshot
Now any process that happens to be in
/snapshot
when the backups start will
not prevent them from happening. Unfortunately, this approach introduces a new problem--there
is a brief window of vulnerability, while the backups are being made, during which a user
process could write to the backup directory. Moreover, if any process opens a backup file for
writing during that window, it will prevent the backup from being remounted read-only, and the
backups will stay vulnerable indefinitely.
Tempting but doesn't seem to work: the 2.4 kernel's mount --bind
Starting with the 2.4-series Linux kernels, it has been possible to mount a filesystem
simultaneously in two different places. "Aha!" you might think, as I did. "Then surely we can
mount the backups read-only in
/snapshot
, and read-write in
/root/snapshot
at the same time!"
Alas, no. Say your backups are on the partition
/dev/hdb1
. If you run the
following commands,
mount /dev/hdb1 /root/snapshot
mount --bind -o ro /root/snapshot /snapshot
then (at least as of the 2.4.9 Linux kernel--updated, still present in the 2.4.20 kernel),
mount
will report
/dev/hdb1
as being mounted read-write in
/root/snapshot
and read-only in
/snapshot
, just as you requested. Don't
let the system mislead you!
It seems that, at least on my system, read-write vs. read-only is a
property of the filesystem, not the mount point. So every time you change the mount status, it
will affect the status at every point the filesystem is mounted, even though neither
/etc/mtab
nor
/proc/mounts
will indicate the change.
In the example above, the second
mount
call will cause both of the mounts to
become read-only, and the backup process will be unable to run. Scratch this one.
Update: I have it on fairly good authority that this behavior is considered a bug in the
Linux kernel, which will be fixed as soon as someone gets around to it. If you are a kernel
maintainer and know more about this issue, or are willing to fix it, I'd love to hear from
you!
My solution: using NFS on localhost
This is a bit more complicated, but until Linux supports
mount --bind
with
different access permissions in different places, it seems like the best choice. Mount the
partition where backups are stored somewhere accessible only by root, such as
/root/snapshot
. Then export it, read-only, via NFS, but only to the same machine.
That's as simple as adding the following line to
/etc/exports
:
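An entry along these lines should do it; the exact options are an assumption, but the essential parts are exporting read-only and only to 127.0.0.1:
# illustrative /etc/exports entry
/root/snapshot   127.0.0.1(ro)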
then start
nfs
and
portmap
from
/etc/rc.d/init.d/
.
Finally mount the exported directory, read-only, as
/snapshot
:
mount -o ro 127.0.0.1:/root/snapshot /snapshot
And verify that it all worked:
mount
...
/dev/hdb1 on /root/snapshot type ext3 (rw)
127.0.0.1:/root/snapshot on /snapshot type nfs (ro,addr=127.0.0.1)
At this point, we'll have the desired effect: only root will be able to write to the backup
(by accessing it through
/root/snapshot
). Other users will see only the read-only
/snapshot
directory. For a little extra protection, you could keep mounted
read-only in
/root/snapshot
most of the time, and only remount it read-write
while backups are happening.
Damian Menscher pointed out
this
CERT advisory
which specifically recommends
against
NFS exporting to localhost,
though since I'm not clear on why it's a problem, I'm not sure whether exporting the backups
read-only as we do here is also a problem. If you understand the rationale behind this
advisory and can shed light on it, would you please contact me? Thanks!
With a little bit of tweaking, we make multiple-level rotating snapshots. On my system, for
example, I keep the last four "hourly" snapshots (which are taken every four hours) as well as
the last three "daily" snapshots (which are taken at midnight every day). You might also want
to keep weekly or even monthly snapshots too, depending upon your needs and your available
space.
Keep an extra script for each level
This is probably the easiest way to do it. I keep one script that runs every four hours to
make and rotate hourly snapshots, and another script that runs once a day to rotate the daily
snapshots. There is no need to use rsync for the higher-level snapshots; just cp -al from the
appropriate hourly one.
Run it all with
cron
To make the automatic snapshots happen, I have added the following lines to root's
crontab
file:
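Entries of roughly this form would match that schedule; the install location of the scripts is an assumption:
0 */4 * * * /usr/local/bin/make_snapshot.sh
0 13 * * *  /usr/local/bin/daily_snapshot_rotate.sh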
They cause
make_snapshot.sh
to be run every four hours on the hour and
daily_snapshot_rotate.sh
to be run every day at 13:00 (that is, 1:00 PM). I have
included those scripts in the appendix.
If you tire of receiving an email from the
cron
process every four hours with
the details of what was backed up, you can tell it to send the output of
make_snapshot.sh
to
/dev/null
, like so:
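Something along these lines (the script path is again an assumption); note that this discards stderr as well as stdout:
0 */4 * * * /usr/local/bin/make_snapshot.sh >/dev/null 2>&1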
Understand, though, that this will prevent you from seeing errors if
make_snapshot.sh
cannot run for some reason, so be careful with it. Creating a third script to check for any
unusual behavior in the snapshot periodically seems like a good idea, but I haven't
implemented it yet. Alternatively, it might make sense to log the output of each run, by
piping it through
tee
, for example. mRgOBLIN wrote in to suggest a better (and
obvious, in retrospect!) approach, which is to send stdout to /dev/null but keep stderr, like
so:
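That is, redirect stdout only, for example:
0 */4 * * * /usr/local/bin/make_snapshot.sh >/dev/null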
I know that listing my actual backup configuration here is a security
risk; please be kind and don't use this information to crack my site. However, I'm not a
security expert, so if you see any vulnerabilities in my setup, I'd greatly appreciate your
help in fixing them. Thanks!
I actually use two scripts, one for every-four-hours (hourly) snapshots, and one for
every-day (daily) snapshots. I am only including the parts of the scripts that relate to
backing up
/home
, since those are the relevant ones here.
I use the NFS-to-localhost trick of exporting
/root/snapshot
read-only as
/snapshot
, as discussed above.
The system has been running without a hitch for months.
Listing one:
make_snapshot.sh
#!/bin/bash
# ----------------------------------------------------------------------
# mikes handy rotating-filesystem-snapshot utility
# ----------------------------------------------------------------------
# this needs to be a lot more general, but the basic idea is it makes
# rotating backup-snapshots of /home whenever called
# ----------------------------------------------------------------------
unset PATH # suggestion from H. Milz: avoid accidental use of $PATH
# ------------- system commands used by this script --------------------
ID=/usr/bin/id;
ECHO=/bin/echo;
MOUNT=/bin/mount;
RM=/bin/rm;
MV=/bin/mv;
CP=/bin/cp;
TOUCH=/bin/touch;
RSYNC=/usr/bin/rsync;
# ------------- file locations -----------------------------------------
MOUNT_DEVICE=/dev/hdb1;
SNAPSHOT_RW=/root/snapshot;
EXCLUDES=/usr/local/etc/backup_exclude;
# ------------- the script itself --------------------------------------
# make sure we're running as root
if (( `$ID -u` != 0 )); then { $ECHO "Sorry, must be root. Exiting..."; exit; } fi
# attempt to remount the RW mount point as RW; else abort
$MOUNT -o remount,rw $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
$ECHO "snapshot: could not remount $SNAPSHOT_RW readwrite";
exit;
}
fi;
# rotating snapshots of /home (fixme: this should be more general)
# step 1: delete the oldest snapshot, if it exists:
if [ -d $SNAPSHOT_RW/home/hourly.3 ] ; then \
$RM -rf $SNAPSHOT_RW/home/hourly.3 ; \
fi ;
# step 2: shift the middle snapshot(s) back by one, if they exist
if [ -d $SNAPSHOT_RW/home/hourly.2 ] ; then \
$MV $SNAPSHOT_RW/home/hourly.2 $SNAPSHOT_RW/home/hourly.3 ; \
fi;
if [ -d $SNAPSHOT_RW/home/hourly.1 ] ; then \
$MV $SNAPSHOT_RW/home/hourly.1 $SNAPSHOT_RW/home/hourly.2 ; \
fi;
# step 3: make a hard-link-only (except for dirs) copy of the latest snapshot,
# if that exists
if [ -d $SNAPSHOT_RW/home/hourly.0 ] ; then \
$CP -al $SNAPSHOT_RW/home/hourly.0 $SNAPSHOT_RW/home/hourly.1 ; \
fi;
# step 4: rsync from the system into the latest snapshot (notice that
# rsync behaves like cp --remove-destination by default, so the destination
# is unlinked first. If it were not so, this would copy over the other
# snapshot(s) too!)
$RSYNC \
-va --delete --delete-excluded \
--exclude-from="$EXCLUDES" \
/home/ $SNAPSHOT_RW/home/hourly.0 ;
# step 5: update the mtime of hourly.0 to reflect the snapshot time
$TOUCH $SNAPSHOT_RW/home/hourly.0 ;
# and that's it for home.
# now remount the RW snapshot mountpoint as readonly
$MOUNT -o remount,ro $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
$ECHO "snapshot: could not remount $SNAPSHOT_RW readonly";
exit;
} fi;
As you might have noticed above, I have added an excludes list to the
rsync
call. This is just to prevent the system from backing up garbage like web browser caches,
which change frequently (so they'd take up space in every snapshot) but would be no loss if
they were accidentally destroyed.
Listing two:
daily_snapshot_rotate.sh
#!/bin/bash
# ----------------------------------------------------------------------
# mikes handy rotating-filesystem-snapshot utility: daily snapshots
# ----------------------------------------------------------------------
# intended to be run daily as a cron job when hourly.3 contains the
# midnight (or whenever you want) snapshot; say, 13:00 for 4-hour snapshots.
# ----------------------------------------------------------------------
unset PATH
# ------------- system commands used by this script --------------------
ID=/usr/bin/id;
ECHO=/bin/echo;
MOUNT=/bin/mount;
RM=/bin/rm;
MV=/bin/mv;
CP=/bin/cp;
# ------------- file locations -----------------------------------------
MOUNT_DEVICE=/dev/hdb1;
SNAPSHOT_RW=/root/snapshot;
# ------------- the script itself --------------------------------------
# make sure we're running as root
if (( `$ID -u` != 0 )); then { $ECHO "Sorry, must be root. Exiting..."; exit; } fi
# attempt to remount the RW mount point as RW; else abort
$MOUNT -o remount,rw $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
$ECHO "snapshot: could not remount $SNAPSHOT_RW readwrite";
exit;
}
fi;
# step 1: delete the oldest snapshot, if it exists:
if [ -d $SNAPSHOT_RW/home/daily.2 ] ; then \
$RM -rf $SNAPSHOT_RW/home/daily.2 ; \
fi ;
# step 2: shift the middle snapshot(s) back by one, if they exist
if [ -d $SNAPSHOT_RW/home/daily.1 ] ; then \
$MV $SNAPSHOT_RW/home/daily.1 $SNAPSHOT_RW/home/daily.2 ; \
fi;
if [ -d $SNAPSHOT_RW/home/daily.0 ] ; then \
$MV $SNAPSHOT_RW/home/daily.0 $SNAPSHOT_RW/home/daily.1; \
fi;
# step 3: make a hard-link-only (except for dirs) copy of
# hourly.3, assuming that exists, into daily.0
if [ -d $SNAPSHOT_RW/home/hourly.3 ] ; then \
$CP -al $SNAPSHOT_RW/home/hourly.3 $SNAPSHOT_RW/home/daily.0 ; \
fi;
# note: do *not* update the mtime of daily.0; it will reflect
# when hourly.3 was made, which should be correct.
# now remount the RW snapshot mountpoint as readonly
$MOUNT -o remount,ro $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
$ECHO "snapshot: could not remount $SNAPSHOT_RW readonly";
exit;
} fi;
Sample output of
ls -l /snapshot/home
total 28
drwxr-xr-x 12 root root 4096 Mar 28 00:00 daily.0
drwxr-xr-x 12 root root 4096 Mar 27 00:00 daily.1
drwxr-xr-x 12 root root 4096 Mar 26 00:00 daily.2
drwxr-xr-x 12 root root 4096 Mar 28 16:00 hourly.0
drwxr-xr-x 12 root root 4096 Mar 28 12:00 hourly.1
drwxr-xr-x 12 root root 4096 Mar 28 08:00 hourly.2
drwxr-xr-x 12 root root 4096 Mar 28 04:00 hourly.3
Notice that the contents of each of the subdirectories of
/snapshot/home/
is a
complete image of
/home
at the time the snapshot was made. Despite the
w
in the directory access permissions, no one--not even root--can write to this directory; it's
mounted read-only.
Bugs
Maintaining Permissions and Owners in the snapshots
The snapshot system above does not properly maintain old ownerships/permissions; if a
file's ownership or permissions are changed in place, then the new ownership/permissions will
apply to older snapshots as well. This is because
rsync
does not unlink files
prior to changing them if the only changes are ownership/permission. Thanks to J.W. Schultz
for pointing this out. Using his new
--link-dest
option, it is now trivial to
work around this problem. See the discussion in the
Putting it all together
section
of
Incremental
backups with
rsync
, above.
mv updates timestamp bug
Apparently, a bug in some Linux kernels between 2.4.4 and 2.4.9 causes
mv
to
update timestamps; this may result in inaccurate timestamps on the snapshot directories.
Thanks to Claude Felizardo for pointing this problem out. He was able to work around the
problem by replacing mv with the following script:
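The idea is a small wrapper that saves and restores the source's timestamp around the move; a sketch in that spirit (the function name and temporary-file path are illustrative, and absolute command paths are used because the snapshot script unsets PATH):
# in make_snapshot.sh, point $MV at the wrapper instead of /bin/mv:
MV=my_mv;
my_mv() {
    REF=/tmp/my_mv_timestamp.$$;      # scratch file that remembers the mtime
    /bin/touch -r "$1" "$REF";        # save the source's timestamp
    /bin/mv "$1" "$2";                # do the actual move
    /bin/touch -r "$REF" "$2";        # restore the timestamp on the target
    /bin/rm -f "$REF";
}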
I have recently received a few reports of what appear to be interaction issues between
Windows and rsync.
One report came from a user who mounts a windows share via Samba, much as I do, and had
files mysteriously being deleted from the backup even when they weren't deleted from the
source. Tim Burt also used this technique, and was seeing files copied even when they hadn't
changed. He determined that the problem was modification time precision; adding
--modify-window=10 caused rsync to behave correctly in both cases.
If you are rsync'ing
from a SAMBA share, you must add --modify-window=10
or you may get inconsistent results.
Update: --modify-window=1 should be sufficient. Yet another update: the problem appears to
still be there. Please let me know if you use this method and files which should not be
deleted are deleted.
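In terms of the snapshot script, that just means adding the option to the rsync call whenever the source is a SAMBA mount; a sketch with illustrative paths:
rsync -va --delete --delete-excluded --modify-window=10 \
      /mnt/winbox-docs/ /root/snapshot/winbox-docs/hourly.0/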
Also, for those who use rsync directly on cygwin, there are some known problems, apparently
related to cygwin signal handling. Scott Evans reports that rsync sometimes hangs on large
directories. Jim Kleckner informed me of an rsync patch, discussed
here
and
here
, which seems to
work around this problem. I have several reports of this working, and two reports of it not
working (the hangs continue). However, one of the users who reported a negative outcome, Greg
Boyington, was able to get it working using Craig Barrett's suggested sleep() approach, which
is documented
here
.
Memory use in rsync scales linearly with the number of files being sync'd. This is a
problem when syncing large file trees, especially when the server involved does not have a lot
of RAM. If this limitation is more of an issue to you than network speed (for example, if you
copy over a LAN), you may wish to use
mirrordir
instead. I haven't tried it personally, but it looks promising. Thanks to Vladimir Vuksan for
this tip!
Several people have been kind enough to send improved backup scripts. There are a number of
good ideas here, and I hope they'll save you time when you're ready to design your own backup
plan. Disclaimer: I have not necessarily tested these; make sure you check the source code and
test them thoroughly before use!
Rob Bos' versatile, GPL'd
shell
script
.
Update!
2002.12.13: check out his
new package
that makes for easier configuration and fixes a couple of bugs.
Leland Elie's very nice GPL'd Python script,
roller.py
(2004.04.13: note link
seems to be down). Does locking for safety, has a
/etc/roller.conf
control
script which can pull from multiple machines automatically and independently.
John Bowman's
rlbackup
utility, which (in his words) provides a simple secure mechanism for generating and
recovering linked backups over the network, with historical pruning. This one makes use of
the --link-dest patch, and keeps a geometric progression of snapshots instead of doing
hourly/daily/weekly.
Darrel O'Pry contributes a
script
modified to handle mysql databases. Thanks, Darrel! He also contributes a
restore script
which works with Geordy Kitchen's backup script.
Craig Jones
contributes
a modified and enhanced version of make_snapshot.sh.
Here is a very schnazzy
perl
script
from Bart Vetters with built-in POD documentation
Stuart Sheldon has contributed
mirror.dist
, a
substantial improvement to the original shell script.
rdiff-backup
, Ben Escoto's remote
incremental backup utility
The GNU coreutils package
(which
includes the part formerly known as fileutils, thanks to Nathan Rosenquist for pointing
that out to me).
dirvish
, a similar but slightly more
sophisticated tool from J.W. Schultz.
rsback
, a backup front-end for
rsync, by Hans-Juergen Beie.
ssync
, a simple sync utility which
can be used instead of rsync in certain cases. Thanks to Patrick Finerty Jr. for the link.
bobs
, the Browseable Online Backup System, with a
snazzy web interface; I look forward to trying it! Thanks to Rene Rask.
LVM
, the Logical Volume Manager
for Linux. In the context of LVM,
snapshot
means one image of the filesystem,
frozen in time. Might be used in conjunction with some of the methods described on this
page.
glastree
, a very nice snapshot-style backup
utility from Jeremy Wohl
mirrordir
, a less memory-intensive (but
more network-intensive) way to do the copying.
A filesystem-level backup utility, rumored to be similar to Glastree and very complete
and usable:
storebackup
.
Thanks to Arthur Korn for the link!
Gary Burd has posted a
page
which discusses how to use this sort of technique to back up laptops. He includes a very
nice python script with it.
Jason Rust implemented something like this in a php script called RIBS. You can find it
here
. Thanks Jason!
Robie Basak pointed out to me that debian's fakeroot utility can help protect a backup
server even if one of the machines it's backing up is compromised and an exploitable hole
is discovered in rsync (this is a bit of a long shot, but in the backup business you really
do have to be paranoid). He sent me this
script
along with
this note
explaining it.
Michael Mayer wrote a handy and similar tutorial which is rather nicer than this
one--has screenshots and everything! You can find it
here
.
The
rsnapshot project
by Nathan Rosenquist which
provides several extensions and features beyond the basic script here, and is really
organized--it seems to be at a level which makes it more of a real package than a
do-it-yourself hack like this page is. Check it out!
Mike Heins wrote
Snapback2
, a
highly improved adaptation of Art Mulder's original script, which includes (among other
features) an apache-style configuration file, multiple redundant backup destinations, and
safety features.
Poul Petersen's
Wombat
backup
system, written in Perl, supports threading for multiple simultaneous backups.
Q: What happens if a file is modified while the backup is taking place?
A: In rsync, transfers are done to a temporary file, which is cut over atomically,
so the transfer either happens in its entirety or not at all. Basically, rsync does "the
right thing," so you won't end up with partially-backed-up files. Thanks to Filippo
Carletti for pointing this out. If you absolutely need a snapshot from a single instant
in time, consider using Sistina's LVM (see reference above).
Q: I really need the original permissions and ownerships in the snapshots, and not
the latest ones. How can I accomplish that?
A: J.W. Schultz has created a --link-dest patch for rsync which takes care of the
hard-linking part of this trick (instead of cp -al). It can preserve permissions and
ownerships. As of
rsync-2.5.6
, it is now standard. See the discussion
above.
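In outline, the --link-dest form replaces the cp -al step: unchanged files are hard-linked against the previous snapshot by rsync itself, which preserves the old ownerships and permissions. A sketch with illustrative directory names:
# rotate hourly.1..hourly.3 as before, mv hourly.0 to hourly.1, then:
rsync -a --delete --link-dest=/root/snapshot/home/hourly.1 \
      /home/ /root/snapshot/home/hourly.0/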
Q: I am backing up a cluster of machines (clients) to a backup server (server).
What's the best way to pull data from each machine in the cluster?
A: Run sshd on each machine in the cluster. Create a passwordless key pair on the
server, and give the public key to each of the client machines, restricted to the rsync
command only (with PermitRootLogin set to forced-commands-only in the sshd_config file).
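A sketch of the two pieces involved; the key type, the exact forced rsync server command, and the file locations are assumptions that depend on how the backup server invokes rsync:
# on each client, in /root/.ssh/authorized_keys (all on one line):
command="/usr/bin/rsync --server --sender -logDtpr . /home/",no-pty,no-port-forwarding ssh-rsa AAAA... backup@server
# on each client, in /etc/ssh/sshd_config:
PermitRootLogin forced-commands-only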
Q: I am backing up many different machines with user accounts not necessarily shared
by the backup server. How should I handle this?
A: Be sure to use the
--numeric-ids
option to rsync so that ownership
is not confused on the restore. Thanks to
Jon Jensen
for this tip!
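For example (host and paths illustrative):
rsync -a --numeric-ids --delete -e ssh \
      root@client.example.com:/home/ /root/snapshot/client/home/hourly.0/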
Q: Can I see a nontrivial example involving rsync include and exclude rules?
A: Martijn Kruissen sent in an email which includes a nice example; I've posted part
of it
here
.
A caching-only name server is used for looking up zone data and caching (storing) the result which
is returned. Then it can return the answers to subsequent queries by using the cached information.
A caching-only server is authoritative only for the local host (i.e., 0.0.127.in-addr.arpa), but it
can automatically send requests to the Internet host handling name lookups for the domain in question.
In most situations, a caching-only name server sends queries directly to the name server that
contains the answer. Because of its simplified nature, a DNS zone file is not created for a caching-only
name server.
Running the caching-only name server in a chroot environment is a more secure approach, since the
chrooted daemon is confined to its own directory tree and cannot touch the rest of the filesystem.
To configure the /etc/named.conf file for a simple caching name server, use this configuration
for all servers that don't act as a master or slave name server. Setting up a simple caching server
for local client machines will reduce the load on the network's primary server. Many users on DSL
connections may use this configuration along with BIND for such a purpose. Ensure that the file /etc/named.conf
contains the entries below:
options {
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
forwarders { 192.168.1.1; 192.168.1.100; };
forward only;
};
// a caching only nameserver config
controls {
inet 127.0.0.1 allow { localhost; } keys { rndckey; };
};
zone "." IN {
type hint;
file "named.ca";
};
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
allow-update { none; };
};
With the forwarders option, 192.168.1.1 and 192.168.1.100 are the IP addresses of the Primary/Master
and Secondary/Slave DNS servers on the network in question. They can also be the IP addresses of the
ISP's DNS server and another DNS server, respectively. With the forward only option set in the named.conf
file, the name server doesn't try to contact other servers to find out information if the forwarders
do not give it an answer. To test this setup, try the following commands:
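The commands below assume a Red Hat-style init system, matching the /etc/rc.d/init.d/ layout mentioned earlier; adjust them for your distribution:
chkconfig named on
service named start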
We have now turned on the named server so it persists across reboots and started the service for the
current session. Now, to test whether the caching name server is working, let's see:
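Assuming /etc/resolv.conf on this machine points at 127.0.0.1, a query such as the following should exercise the cache (the host looked up is just an example):
nslookup www.redhat.com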
nslookup now asked named to look for the machine www.redhat.com. It then contacted one of
the name server machines named in the root.cache file, and asked its way from there. It might take
a while before the result is shown, as the resolver searches all the domains listed in /etc/resolv.conf.
When tried again, the result should be similar to this example:
Note the Non-authoritative answer in the result this time. This means that named did not go out
on the network to ask this time; it instead looked the answer up in its cache and found it there. But the
cached information might be out of date, so the user is warned of this danger by the Non-authoritative
answer label. When nslookup says this the second time a user asks for a host, it is a sign that named
is caching the information and that it's working. Now exit nslookup by giving the command exit.
In such cases the UID of the files is often different from the UID of "legitimate" files in the polluted directories, and you can probably
use this fact for quick elimination of the tar bomb. But the idea of using the list of files from the tar bomb to eliminate the offending
files also works if you observe some precautions -- some directories that were created can have the same names as existing directories.
Never do rm in -exec or via xargs without testing first.
Notable quotes:
"... You don't want to just rm -r everything that tar tf tells you, since it might include directories that were not empty before unpacking! ..."
"... Another nice trick by @glennjackman, which preserves the order of files, starting from the deepest ones. Again, remove echo when done. ..."
"... One other thing: you may need to use the tar option --numeric-owner if the user names and/or group names in the tar listing make the names start in an unpredictable column. ..."
"... That kind of (antisocial) archive is called a tar bomb because of what it does. Once one of these "explodes" on you, the solutions in the other answers are way better than what I would have suggested. ..."
"... The easiest (laziest) way to do that is to always unpack a tar archive into an empty directory. ..."
"... The t option also comes in handy if you want to inspect the contents of an archive just to see if it has something you're looking for in it. If it does, you can, optionally, just extract the file(s) you want. ..."
This can be piped to xargs directly, but beware : do the deletion very carefully. You don't want to just rm -r
everything that tar tf tells you, since it might include directories that were not empty before unpacking!
You could do
tar tf archive.tar | xargs -d'\n' rm -v
tar tf archive.tar | sort -r | xargs -d'\n' rmdir -v
to first remove all files that were in the archive, and then the directories that are left empty.
sort -r (glennjackman suggested tac instead of sort -r in the comments to the accepted
answer, which also works since tar 's output is regular enough) is needed to delete the deepest directories first; otherwise
a case where dir1 contains a single empty directory dir2 will leave dir1 after the rmdir
pass, since it was not empty before dir2 was removed.
This will generate a lot of
rm: cannot remove `dir/': Is a directory
and
rmdir: failed to remove `dir/': Directory not empty
rmdir: failed to remove `file': Not a directory
Shut this up with 2>/dev/null if it annoys you, but I'd prefer to keep as much information on the process as possible.
And don't do it until you are sure that you match the right files. And perhaps try rm -i to confirm everything. And
have backups, eat your breakfast, brush your teeth, etc.
===
List the contents of the tar file like so:
tar tzf myarchive.tar.gz
Then, delete those file names by iterating over that list:
while IFS= read -r file; do echo "$file"; done < <(tar tzf myarchive.tar.gz)
This will still just list the files that would be deleted. Replace echo with rm if you're really sure these are the ones you want
to remove. And maybe make a backup to be sure.
In a second pass, remove the directories that are left over:
while IFS= read -r file; do rmdir "$file"; done < <(tar tzf myarchive.tar.gz)
Since rmdir refuses to remove non-empty directories, this prevents directories that already existed (and still contain other files) from being deleted.
Another nice trick by @glennjackman, which preserves the order of files, starting from the deepest ones. Again, remove echo
when done.
tar tf myarchive.tar | tac | xargs -d'\n' echo rm
This could then be followed by the normal rmdir cleanup.
Here's a possibility that will take the extracted files and move them to a subdirectory, cleaning up your main folder.
#!/usr/bin/perl -w
use strict;
use Getopt::Long;

my $clean_folder = "clean";
my $DRY_RUN;
die "Usage: $0 [--dry] [--clean=dir-name]\n"
    if ( !GetOptions("dry!"    => \$DRY_RUN,
                     "clean=s" => \$clean_folder) );

# Protect the 'clean_folder' string from shell substitution
$clean_folder =~ s/'/'\\''/g;

# Process the "tar tv" listing and output a shell script.
print "#!/bin/sh\n" if ( !$DRY_RUN );

while (<>)
{
    chomp;

    # Strip out permissions string and the directory entry from the 'tar' list
    my $perms  = substr($_, 0, 10);
    my $dirent = substr($_, 48);

    # Drop entries that are in subdirectories
    next if ( $dirent =~ m:/.: );

    # If we're in "dry run" mode, just list the permissions and the
    # directory entries.
    if ( $DRY_RUN )
    {
        print "$perms|$dirent\n";
        next;
    }

    # Emit the shell code to clean up the folder
    $dirent =~ s/'/'\\''/g;
    print "mv -i '$dirent' '$clean_folder'/.\n";
}
Save this to the file fix-tar.pl and then execute it like this:
$ tar tvf myarchive.tar | perl fix-tar.pl --dry
This will confirm that your tar list is like mine. You should get output like:
-rw-rw-r--|batch
-rw-rw-r--|book-report.png
-rwx------|CaseReports.png
-rw-rw-r--|caseTree.png
-rw-rw-r--|tree.png
drwxrwxr-x|sample/
If that looks good, then run it again like this:
$ mkdir cleanup
$ tar tvf myarchive.tar | perl fix-tar.pl --clean=cleanup > fixup.sh
The fixup.sh script will be the shell commands that will move the top-level files and directories into a "clean"
folder (in this instance, the folder called cleanup). Have a peek through this script to confirm that it's all kosher.
If it is, you can now clean up your mess with:
$ sh fixup.sh
I prefer this kind of cleanup because it doesn't destroy anything that isn't already destroyed by being overwritten by that initial
tar xv.
Note: if that initial dry run output doesn't look right, you should be able to fiddle with the numbers in the two substr
function calls until they look proper. The $perms variable is used only for the dry run so really only the $dirent
substring needs to be proper.
One other thing: you may need to use the tar option --numeric-owner if the user names and/or group names
in the tar listing make the names start in an unpredictable column.
===
That kind of (antisocial) archive is called a tar bomb because of what it does. Once one of these "explodes" on you, the solutions
in the other answers are way better than what I would have suggested.
The best "solution", however, is to prevent the problem in the first place.
The easiest (laziest) way to do that is to always unpack a tar archive into an empty directory. If it includes a top
level directory, then you just move that to the desired destination. If not, then just rename your working directory (the one that
was empty) and move that to the desired location.
If you just want to get it right the first time, you can run tar -tvf archive-file.tar | less and it will list the contents of
the archive so you can see how it is structured and then do what is necessary to extract it to the desired location to start with.
The t option also comes in handy if you want to inspect the contents of an archive just to see if it has something you're
looking for in it. If it does, you can, optionally, just extract the file(s) you want.
The variable CDPATH defines the search path for the cd command's destination directories, so it serves
much like a "home for directories". The danger is in creating too complex a CDPATH; often a single
directory works best. For example, export CDPATH=/srv/www/public_html. Now, instead of typing
cd /srv/www/public_html/CSS, I can simply type: cd CSS
Use CDPATH to access frequent directories in bash
Mar 21, '05 10:01:00AM • Contributed by:
jonbauman
I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.),
but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH
, as described in man bash:
The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination
directories specified by the cd command. A sample value is ".:~:/usr".
Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile
for permanent use):
CDPATH=".:~:~/Library"
This way, no matter where I am in the directory tree, I can just cd dirname , and it will take me to the directory that
is a subdirectory of any of the ones in the list. For example:
$ cd
$ cd Documents
/Users/baumanj/Documents
$ cd Pictures
/Users/username/Pictures
$ cd Preferences
/Users/username/Library/Preferences
etc...
[ robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so
I'm assuming it will be of interest to some other readers as well.]
cdable_vars is also nice
Authored by: clh on Mar 21, '05 08:16:26PM
Check out the bash command shopt -s cdable_vars
From the man bash page:
cdable_vars
If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value
is the directory to change to.
With this set, if I give the following bash command:
export d="/Users/chap/Desktop"
I can then simply type
cd d
to change to my Desktop directory.
I put the shopt command and the various export commands in my .bashrc file.
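Putting the pieces of this hint together, the relevant .bashrc fragment might look something like this; the directory names are just examples:
# search path for cd: current directory, then $HOME, then ~/Library
CDPATH=.:$HOME:$HOME/Library
# let 'cd varname' treat the variable's value as the target directory
shopt -s cdable_vars
export d="$HOME/Desktop"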
"... But human life depends on whether the accident is caused by a human or not, and the level of intent. It isn't just a case of the price - the law is increasingly locking people up for driving negligence (rightly in my mind) Who gets locked up when the program fails? Or when the program chooses to hit one person and not another in a complex situation? ..."
Electric,
driverless shuttles with no steering wheel and no brake pedal are now operating in Las Vegas.
There's a new thrill on the streets of downtown Las Vegas, where high- and low-rollers alike are
climbing aboard what officials call the first driverless electric shuttle operating on a public U.S.
street.
The oval-shaped shuttle began running Tuesday as part of a 10-day pilot program, carrying up to
12 passengers for free along a short stretch of the Fremont Street East entertainment district.
The vehicle has a human attendant and computer monitor, but no steering wheel and no brake pedals.
Passengers push a button at a marked stop to board it.
The shuttle uses GPS, electronic curb sensors and other technology, and doesn't require lane lines
to make its way.
"The ride was smooth. It's clean and quiet and seats comfortably," said Mayor Carolyn Goodman,
who was among the first public officials to hop a ride on the vehicle developed by the French company
Navya and dubbed Arma.
"I see a huge future for it once they get the technology synchronized," the mayor said Friday.
The top speed of the shuttle is 25 mph, but it's running about 15 mph during the trial, Navya
spokesman Martin Higgins said.
Higgins called it "100 percent autonomous on a programmed route."
"If a person or a dog were to run in front of it, it would stop," he said.
Higgins said it's the company's first test of the shuttle on a public street in the U.S. A similar
shuttle began testing in December at a simulated city environment at a University of Michigan research
center.
The vehicle being used in public was shown earlier at the giant CES gadget show just off the Las
Vegas Strip.
Las Vegas city community development chief Jorge Cervantes said plans call for installing transmitters
at the Fremont Street intersections to communicate red-light and green-light status to the shuttle.
He said the city hopes to deploy several autonomous shuttle vehicles - by Navya or another company
- later this year for a downtown loop with stops at shopping spots, restaurants, performance venues,
museums, a hospital and City Hall.
At a cost estimated at $10,000 a month, Cervantes said the vehicle could be cost-efficient compared
with a single bus and driver costing perhaps $1 million a year.
The company said it has shuttles in use in France, Australia, Switzerland and other countries
that have carried more than 100,000 passengers in more than a year of service.
Don't Worry Taxi Drivers
Don't worry taxi drivers because some of my readers say
1.This will never work
2.There is no demand
3.Technology cost will be too high
4.Insurance cost will be too high
5.The unions will not allow it
6.It will not be reliable
7.Vehicles will be stolen
8.It cannot handle snow, ice, or any adverse weather.
9.It cannot handle dogs, kids, or 80-year old men on roller skates who will suddenly veer into
traffic causing a clusterfack that will last days.
10.This is just a test, and testing will never stop.
Real World Analysis
Those in the real world expect millions of long haul truck driving jobs will vanish by 2020-2022
and massive numbers of taxi job losses will happen simultaneously or soon thereafter.
Yes, I bumped up my timeline by two years (from 2022-2024 to 2020-2022) for this sequence of
events.
My new timeline is not at all tremendously optimistic given the rapid changes we have seen.
garypaul -> Sudden Debt •Jan 14, 2017 7:56 PM
You're getting carried away Sudden Debt. This robot stuff works great in the lab/test
zones. Whether it is transplantable on a larger scale is still unknown. The interesting thing
is, all my friends who are computer programmers/engineers/scientists are skeptical about this
stuff, but all my friends who know nothing about computer science are absolutely wild about
the "coming age of robots/AI". Go figure.
P.S. Of course the computer experts that are milking investment money with their start-ups
will tell you it's great
ChartreuseDog -> garypaul •Jan 14, 2017 9:15 PM
I'm an engineer (well, OK, an electrical engineering technical team lead). I've been an
electronics and embedded computer engineer for about 4 decades.
This Vegas thing looks real - predefined route, transmitted signals for traffic lights, like
light rail without the rails.
Overall, autonomous driving looks like it's almost here, if you like spinning LIDAR
transceivers on the top of cars.
Highway driving is much closer to being solved, by the way. It's suburban and urban side
streets that are the tough stuff.
garypaul -> ChartreuseDog •Jan 14, 2017 9:22 PM
"Highway driving is much closer to being solved".
That's my whole point. It's not an equation that you "solve". It's a million unexpected
things. Last I heard, autonomous cars were indeed already crashing.
MEFOBILLS -> CRM114 •Jan 14, 2017 6:07 PM
Who gets sued? For how much? What about cases where a human driver wouldn't have
killed anybody?
I've been in corporate discussions about this very topic. At a corporation that makes this
technology by the way. The answer:
Insurance companies and the law will figure it out. Basically, if somebody gets run
over, then the risk does not fall on the technology provider. Corporate rules can be
structured to prevent piercing the corporate veil on this.
Human life does have a price. Insurance figures out how much it costs to pay off, and then
jacks up rates accordingly.
CRM114 -> MEFOBILLS •Jan 14, 2017 6:20 PM
Thanks, that's interesting, although I must say that isn't a solution, it's a hope that
someone else will come up with one.
But human life depends on whether the accident is caused by a human or not, and the level
of intent. It isn't just a case of the price - the law is increasingly locking people up for
driving negligence (rightly in my mind) Who gets locked up when the program fails? Or when the
program chooses to hit one person and not another in a complex situation?
At the moment, corporate manslaughter laws are woefully inadequate. There's clearly one law
for the rich and another for everyone else. Mary Barra would be wearing an orange jumpsuit
otherwise.
I am unaware of any automatic machinery which operates in public areas and carries
significant risk. Where accidents have happened in the past(e.g.elevators), either the
machinery gets changed to remove the risk, or use is discontinued, or the public is separated
from the machinery. I don't think any of these are possible for automatic vehicles.
TuPhat -> shovelhead •Jan 14, 2017 7:53 PM
Elevators have no choice of route, only how high or low you want to go. autos have no
comparison. Disney world has had many robotic attractions for decades but they are still only
entertainment. keep entertaining yourself Mish. when I see you on the road I will easily pass
you by.
MEFOBILLS -> Hulk •Jan 14, 2017 6:12 PM
The future is here: See movie "obsolete" on Amazon. Free if you have prime.
This is so exciting! Just think about the possibilities here... Shuttles could be outfitted
with all kinds of great gizmos to identify their passengers based on RFID chips in credit
cards, facial recognition software, voice prints, etc. Then, depending on who is controlling
the software, the locks on the door could engage and the shuttle could drive around town
dropping of its passengers to various locations eager for their arrival. Trivial to round up
illegal aliens, parole violators, or people with standing warrants for arrest. Equally easy to
nab people who are delinquent on their taxes, credit cards, mortgages, and spousal support.
With a little info from Facebook or Google, a drop-off at the local attitude-adjustment
facility might be desirable for those who frequent alternative media or have unhealthy
interests in conspiracy theories or the activities at pizza parlors. Just think about the
wonderful possibilties here!
Twee Surgeon -> PitBullsRule •Jan 14, 2017 6:29 PM
Will unemployed taxi drivers be allowed on the bus with a bottle of vodka and a gallon of
gas with a rag in it ?
When the robot trucks arrive at the robot factory and are unloaded by robot forklifts, who
will buy the end products ?
It won't be truck drivers, taxi drivers or automated production line workers.
The only way massive automation would work is if some people were planning on a vastly reduced
population in the future. It has happened before, they called it the Black Death. The Cultural
and Economic consequences of it in Europe were enormous, world changing and permanent.
"... The unionization rate has plummeted over the last four decades, but this is the result of policy decisions, not automation. Canada, a country with a very similar economy and culture, had no remotely comparable decline in unionization over this period. ..."
"... The unemployment rate and overall strength of the labor market is also an important factor determining workers' ability to secure their share of the benefits of productivity growth in wages and other benefits. When the Fed raises interest rates to deliberately keep workers from getting jobs, this is not the result of automation. ..."
"... It is also not automation alone that allows some people to disproportionately get the gains from growth. The average pay of doctors in the United States is over $250,000 a year because they are powerful enough to keep out qualified foreign doctors. They require that even established foreign doctors complete a U.S. residency program before they are allowed to practice medicine in the United States. If we had a genuine free market in physicians' services every MRI would probably be read by a much lower paid radiologist in India rather than someone here pocketing over $400,000 a year. ..."
Weak Labor Market: President Obama Hides Behind Automation
It really is shameful how so many people, who certainly should know better, argue that automation
is the factor depressing the wages of large segments of the workforce and that education (i.e.
blame the ignorant workers) is the solution. President Obama takes center stage in this picture
since he said almost exactly this in his farewell address earlier in the week. This misconception
is repeated in a Claire Cain Miller's New York Times column * today. Just about every part of
the story is wrong.
Starting with the basic story of automation replacing workers, we have a simple way of measuring
this process, it's called "productivity growth." And contrary to what the automation folks tell
you, productivity growth has actually been very slow lately.
[Graph]
The figure above shows average annual rates of productivity growth for five year periods, going
back to 1952. As can be seen, the pace of automation (productivity growth) has actually been quite
slow in recent years. It is also projected by the Congressional Budget Office and most other forecasters
to remain slow for the foreseeable future, so the prospect of mass displacement of jobs by automation
runs completely counter to what we have been seeing in the labor market.
Perhaps more importantly the idea that productivity growth is bad news for workers is 180 degrees
at odds with the historical experience. In the period from 1947 to 1973, productivity growth averaged
almost 3.0 percent, yet the unemployment rate was generally low and workers saw rapid wage gains.
The reason was that workers had substantial bargaining power, in part because of strong unions,
and were able to secure the gains from productivity growth for themselves in higher living standards,
including more time off in the form of paid vacation days and paid sick days. (Shorter work hours
sustain the number of jobs in the face of rising productivity.)
The unionization rate has plummeted over the last four decades, but this is the result
of policy decisions, not automation. Canada, a country with a very similar economy and culture,
had no remotely comparable decline in unionization over this period.
The unemployment rate and overall strength of the labor market is also an important factor
determining workers' ability to secure their share of the benefits of productivity growth in wages
and other benefits. When the Fed raises interest rates to deliberately keep workers from getting
jobs, this is not the result of automation.
It is also not automation alone that allows some people to disproportionately get the gains
from growth. The average pay of doctors in the United States is over $250,000 a year because they
are powerful enough to keep out qualified foreign doctors. They require that even established
foreign doctors complete a U.S. residency program before they are allowed to practice medicine
in the United States. If we had a genuine free market in physicians' services every MRI would
probably be read by a much lower paid radiologist in India rather than someone here pocketing
over $400,000 a year.
Similarly, automation did not make our patents and copyrights longer and stronger. These
protectionist measures result in us paying over $430 billion a year for drugs that would likely
cost one tenth of this amount in a free market. And automation did not force us to institutionalize
rules that created an incredibly bloated financial sector with Wall Street traders and hedge fund
partners pocketing tens of millions or even hundreds of millions a year. Nor did automation give
us a corporate governance structure that allows even the most incompetent CEOs to rip off their
companies and pay themselves tens of millions a year.
Yes, these and other topics are covered in my (free) book "Rigged: How Globalization and the
Rules of the Modern Economy Were Structured to Make the Rich Richer." ** It is understandable
that the people who benefit from this rigging would like to blame impersonal forces like automation,
but it just ain't true and the people repeating this falsehood should be ashamed of themselves.
A Darker Theme in Obama's Farewell: Automation Can
Divide Us https://nyti.ms/2ioACof via @UpshotNYT
NYT - Claire Cain Miller - January 12, 2017
Underneath the nostalgia and hope in President Obama's farewell address Tuesday night was a
darker theme: the struggle to help the people on the losing end of technological change.
"The next wave of economic dislocations won't come from overseas," Mr. Obama said. "It will
come from the relentless pace of automation that makes a lot of good, middle-class jobs obsolete."
Donald J. Trump has tended to blame trade, offshoring and immigration. Mr. Obama acknowledged
those things have caused economic stress. But without mentioning Mr. Trump, he said they divert
attention from the bigger culprit.
Economists agree that automation has played a far greater role in job loss, over the long run,
than globalization. But few people want to stop technological progress. Indeed, the government
wants to spur more of it. The question is how to help those that it hurts.
The inequality caused by automation is a main driver of cynicism and political polarization,
Mr. Obama said. He connected it to the racial and geographic divides that have cleaved the country
post-election.
It's not just racial minorities and others like immigrants, the rural poor and transgender
people who are struggling in society, he said, but also "the middle-aged white guy who, from the
outside, may seem like he's got advantages, but has seen his world upended by economic and cultural
and technological change."
Technological change will soon be a problem for a much bigger group of people, if it isn't
already. Fifty-one percent of all the activities Americans do at work involve predictable physical
work, data collection and data processing. These are all tasks that are highly susceptible to
being automated, according to a report McKinsey published in July using data from the Bureau of
Labor Statistics and O*Net to analyze the tasks that constitute 800 jobs.
Twenty-eight percent of work activities involve tasks that are less susceptible to automation
but are still at risk, like unpredictable physical work or interacting with people. Just 21 percent
are considered safe for now, because they require applying expertise to make decisions, do something
creative or manage people.
The service sector, including health care and education jobs, is considered safest. Still,
a large part of the service sector is food service, which McKinsey found to be the most threatened
industry, even more than manufacturing. Seventy-three percent of food service tasks could be automated,
it found.
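To see how task-level shares like these roll up into a single exposure number, here is a minimal
Python sketch; the three activity shares are the figures quoted above, while the per-category
automation probabilities are purely illustrative assumptions, not McKinsey's estimates:

# Rough illustration: combine the article's activity shares with assumed automation odds.
# The shares come from the figures quoted above; the probabilities are illustrative guesses.
activity_shares = {
    "highly_susceptible": 0.51,   # predictable physical work, data collection, data processing
    "at_some_risk": 0.28,         # unpredictable physical work, interacting with people
    "safe_for_now": 0.21,         # applying expertise, creative work, managing people
}

assumed_automation_prob = {       # hypothetical values, NOT from the McKinsey report
    "highly_susceptible": 0.8,
    "at_some_risk": 0.3,
    "safe_for_now": 0.05,
}

expected_share = sum(share * assumed_automation_prob[cat] for cat, share in activity_shares.items())
print(f"Expected share of work activities automated: {expected_share:.0%}")   # about 50% here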
In December, the White House released a report on automation, artificial intelligence and the
economy, warning that the consequences could be dire: "The country risks leaving millions of Americans
behind and losing its position as the global economic leader."
No one knows how many people will be threatened, or how soon, the report said. It cited various
researchers' estimates that from 9 percent to 47 percent of jobs could be affected.
In the best case, it said, workers will have higher wages and more leisure time. In the worst,
there will be "significantly more workers in need of assistance and retraining as their skills
no longer match the demands of the job market."
Technology delivers its benefits and harms in an unequal way. That explains why even though
the economy is humming, it doesn't feel like it for a large group of workers.
Education is the main solution the White House advocated. When the United States moved from
an agrarian economy to an industrialized economy, it rapidly expanded high school education: By
1951, the average American had 6.2 more years of education than someone born 75 years earlier.
The extra education enabled people to do new kinds of jobs, and explains 14 percent of the annual
increases in labor productivity during that period, according to economists.
Now the country faces a similar problem. Machines can do many low-skilled tasks, and American
children, especially those from low-income and minority families, lag behind their peers in other
countries educationally.
The White House proposed enrolling more 4-year-olds in preschool and making two years of community
college free for students, as well as teaching more skills like computer science and critical
thinking. For people who have already lost their jobs, it suggested expanding apprenticeships
and retraining programs, on which the country spends half what it did 30 years ago.
Displaced workers also need extra government assistance, the report concluded. It suggested
ideas like additional unemployment benefits for people who are in retraining programs or live
in states hardest hit by job loss. It also suggested wage insurance for people who lose their
jobs and have to take a new one that pays less. Someone who made $18.50 an hour working in manufacturing,
for example, would take an $8 pay cut if he became a home health aide, one of the jobs that is
growing most quickly.
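For a sense of the arithmetic behind a wage-insurance proposal of this kind, here is a minimal
Python sketch; the 50 percent replacement rate and the $10,000 annual cap are illustrative
assumptions, not parameters from the White House report:

# Minimal wage-insurance arithmetic for the worker in the example above.
# The 50% replacement rate and $10,000 annual cap are illustrative assumptions only.
HOURS_PER_YEAR = 2000   # roughly full time: 40 hours a week for 50 weeks

def wage_insurance_payout(old_wage, new_wage, replacement_rate=0.5, annual_cap=10_000):
    """Pay a fraction of the hourly wage loss, capped per year."""
    hourly_loss = max(old_wage - new_wage, 0.0)
    return min(hourly_loss * replacement_rate * HOURS_PER_YEAR, annual_cap)

# $18.50/hour in manufacturing -> home health aide at an $8.00/hour pay cut.
print(wage_insurance_payout(18.50, 10.50))   # 8000.0 per year under these assumptions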
President Obama, in his speech Tuesday, named some other policy ideas for dealing with the problem:
stronger unions, an updated social safety net and a tax overhaul so that the people benefiting
most from technology share some of their earnings.
The Trump administration probably won't agree with many of those solutions. But the economic
consequences of automation will be one of the biggest problems it faces.
"... By Shane Greenstein On Jan 11, 2017 · Add Comment · In Broadband , communication , Esssay , Net Neutrality ..."
"... The bottom line: evenings require far greater capacity than other times of the day. If capacity is not adequate, it can manifest as a bottleneck at many different points in a network-in its backbone, in its interconnection points, or in its last mile nodes. ..."
"... The use of tiers tends to grab attention in public discussion. ISPs segment their users. Higher tiers bring more bandwidth to a household. All else equal, households with higher tiers experience less congestion at peak moments. ..."
"... such firms (typically) find clever ways to pile on fees, and know how to stymie user complaints with a different type of phone tree that makes calls last 45 minutes. Even when users like the quality, the aggressive pricing practices tend to be quite irritating. ..."
"... Some observers have alleged that the biggest ISPs have created congestion issues at interconnection points for purposes of gaining negotiating leverage. These are serious charges, and a certain amount of skepticism is warranted for any broad charge that lacks specifics. ..."
"... Congestion is inevitable in a network with interlocking interests. When one part of the network has congestion, the rest of it catches a cold. ..."
"... More to the point, growth in demand for data should continue to stress network capacity into the foreseeable future. Since not all ISPs will invest aggressively in the presence of congestion, some amount of congestion is inevitable. So, too, is a certain amount of irritation. ..."
Congestion on the Last Mile
By Shane Greenstein, Jan 11, 2017 - Broadband, communication, Essay, Net Neutrality
It has long been recognized that networked services contain weak-link vulnerabilities. That is,
the performance of any frontier device depends on the performance of every contributing component
and service. This column focuses on one such phenomenon, which goes by the label "congestion."
No, this is not a new type of allergy, but, as with a bacterium, many users want to avoid it,
especially advanced users of frontier network services.
Congestion arises when network capacity does not provide adequate service during heavy use.
Congestion slows down data delivery and erodes application performance, especially for
time-sensitive apps such as movies, online videos, and interactive gaming.
Concerns about congestion are pervasive. Embarrassing reports about broadband networks with slow
speeds highlight the role of congestion. Regulatory disputes about data caps and pricing tiers
question whether these programs limit the use of data in a useful way. Investment analysts focus
on the frequency of congestion as a measure of a broadband network's quality.
What economic factors produce congestion? Let's examine the root economic causes.
The Basics
Congestion arises when demand for data exceeds supply in a very specific sense.
Start with demand. To make this digestible, let's confine our attention to US households in an
urban or suburban area, which produces the majority of data traffic.
No simple generalization can characterize all users and uses. The typical household today uses
data for a wide variety of purposes: email, video, passive browsing, music videos, streaming of
movies, and e-commerce. Networks also interact with a wide variety of end devices: PCs, tablets,
smartphones on local Wi-Fi, streaming to television, home video alarm systems, remote temperature
control systems, and plenty more.
It is complicated, but two facts should be foremost in this discussion. First, a high fraction of
traffic is video: anywhere from 60 to 80 percent, depending on the estimate. Second, demand peaks
at night. Most users want to do more things after dinner, far more than any other time during the day.
Every network operator knows that demand for data will peak (predictably) between approximately
7 p.m. and 11 p.m. Yes, it is predictable. Every day of the week looks like every other, albeit
with steady growth over time and with some occasional fluctuations for holidays and weather. The
weekends don't look any different, by the way, except that the daytime has a bit more demand than
during the week.
The bottom line: evenings require far greater capacity than other times of the day. If capacity is
not adequate, it can manifest as a bottleneck at many different points in a network: in its backbone,
in its interconnection points, or in its last mile nodes.
This is where engineering and economics can become tricky to explain (and to manage). Consider
this metaphor (with apologies to network engineers): network congestion can resemble a bathtub
backed up with water. The water might fail to drain because something is interfering with the
mouth of the drain or there is a clog far down the pipes. So, too, congestion in a data network
can arise from inadequate capacity close to the household or inadequate capacity somewhere in the
infrastructure supporting delivery of data.
Numerous features inside a network can be responsible for congestion, and that shapes which set of
households experience congestion most severely. Accordingly, numerous different investments can
alleviate the congestion in specific places. A network could require a "splitting of nodes" or a
"larger pipe" to support a content delivery network (CDN) or could require "more ports at the point
of interconnection" between a particular backbone provider and the network.
As it turns out, despite that complexity, we live in an era in which bottlenecks arise most often
in the last mile, which ISPs build and operate. That simplifies the economics: once an ISP builds
and optimizes a network to meet maximum local demand at peak hours, then that same capacity will
be able to meet lower demand the rest of the day. Similarly, high capacity can also address lower
levels of peak demand on any other day.
Think of the economics this way. An awesome network, with extraordinary capacity optimized to its
users, will alleviate congestion at most households on virtually every day of the week, except the
most extraordinary. Accordingly, as the network becomes less than awesome with less capacity, it
will generate a number of (predictable) days of peak demand with severe congestion throughout the
entire peak time period at more households. The logic carries through: the less awesome the network,
the greater the number of households who experience those moments of severe congestion, and the
greater the frequency.
That provides a way to translate many network engineering benchmarks, such as the percentage of
packet loss. More packet loss correlates with more congestion, and that corresponds with a larger
number of moments when some household experiences poor service.
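To make the relationship between capacity and congested moments concrete, here is a minimal
simulation sketch (not Greenstein's model; the household count, demand distribution, and capacity
figures are arbitrary assumptions chosen only to show the direction of the effect):

# Toy model: households sharing one last-mile node, evening peak demand vs. node capacity.
# All numbers are arbitrary assumptions; only the qualitative pattern matters.
import random

random.seed(0)
HOUSEHOLDS = 200
PEAK_HOURS = 120   # roughly four evening hours a day over a month

def congested_hour_share(node_capacity_mbps):
    """Fraction of peak hours in which aggregate demand exceeds node capacity."""
    congested = 0
    for _ in range(PEAK_HOURS):
        # Each household pulls a random amount of traffic during the evening peak (Mbps).
        demand = sum(random.uniform(1.0, 8.0) for _ in range(HOUSEHOLDS))
        if demand > node_capacity_mbps:
            congested += 1
    return congested / PEAK_HOURS

for capacity in (800, 900, 1000):   # a less awesome node, a marginal one, a more awesome one
    print(capacity, round(congested_hour_share(capacity), 2))

With these made-up numbers, the under-provisioned node is congested in nearly every peak hour, the
marginal one roughly half the time, and the generously provisioned one almost never, which is the
"less awesome network, more congested moments" logic in miniature.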
Tradeoffs and Externalities
Not all market participants react to congestion in the same way. Let's first focus on the gazillion
Web firms that supply the content. They watch this situation with a wary eye, and it's no wonder.
Many third-party services, such as those streaming video, deliver a higher-quality experience to
users whose network suffers less congestion.
Many content providers invest to alleviate congestion. Some invest in compression software and
superior webpage design, which loads in ways that speed up the user experience. Some buy CDN
services to speed delivery of their data. Some of the largest content firms, such as YouTube,
Google, Netflix, and Facebook, build their own CDN services to improve delivery.
Next, focus on ISPs. They react with various investment and pricing strategies. At one extreme,
some ISPs have chosen to save money by investing conservatively, and they suffer the complaints of
users. At the other extreme, some ISPs build a premium network, then charge premium prices for the
best services.
There are two good reasons for that variety. First, ISPs differ in their rates of capital investment.
Partly this is due to investment costs, which vary greatly with density, topography, and local
government relations. Rates of investment tend to be inherited from long histories, sometimes as a
product of decisions made many years ago, which accumulated over time. These commitments can change,
but generally don't, because investors watch capital commitments and react strongly to any departure
from history.
The second reason is more subtle. ISPs take different approaches to raising revenue per household,
and this results in (effectively) different relationships with banks and stockholders, and, de facto,
different budgets for investment. Where does the difference in revenue come from? For one, competitive
conditions and market power differ across neighborhoods. In addition, ISPs use different pricing
strategies, taking substantially different approaches to discounts, tiered pricing structures, data
cap policies, bundled contract offerings, and nuisance fees.
The use of tiers tends to grab attention in public discussion. ISPs segment their users. Higher
tiers bring more bandwidth to a household. All else equal, households with higher tiers experience
less congestion at peak moments. Investors like tiers because they don't obligate ISPs to offer
unlimited service and, in the long run, raise revenue without additional costs.
Users have a more mixed reaction. Light users like the lower prices of lower tiers, and appreciate
saving money for doing little other than email and static browsing.
In contrast, heavy users perceive that they pay extra to receive the bandwidth that the ISP used to
supply as a default.
ISPs cannot win for losing. The archetypical conservative ISP invests adequately to relieve
congestion some of the time, but not all of the time. Its management then must face the occasional
phone calls of its users, which they stymie with phone trees that make service calls last 45 minutes.
Even if users like the low prices, they find the service and reliability quite irritating.
The archetypical aggressive ISP, in contrast, achieves a high-quality network, which relieves severe
congestion much of the time. Yet, such firms (typically) find clever ways to pile on fees, and know
how to stymie user complaints with a different type of phone tree that makes calls last 45 minutes.
Even when users like the quality, the aggressive pricing practices tend to be quite irritating.
One last note: it is a complicated situation where ISPs interconnect with content providers. Multiple
parties must invest, and the situations involve many supplier interests and strategic contingencies.
Some observers have alleged that the biggest ISPs have created congestion issues at interconnection
points for purposes of gaining negotiating leverage. These are serious charges, and a certain amount
of skepticism is warranted for any broad charge that lacks specifics. Somebody ought to do a sober
and detailed investigation to confront those theories with evidence. (I am just saying.)
What does basic economics tell us about congestion? Congestion is inevitable in a network with
interlocking interests. When one part of the network has congestion, the rest of it catches a cold.
More to the point, growth in demand for data should continue to stress network capacity into the
foreseeable future. Since not all ISPs will invest aggressively in the presence of congestion, some
amount of congestion is inevitable. So, too, is a certain amount of irritation.
Class Warfare
[A study published late last month by the White House Council of Economic Advisers (CEA)]
released Dec. 20, said the jobs of between 1.34 million and 1.67 million truck drivers would be at
risk due to the growing utilization of heavy-duty vehicles operated via artificial intelligence.
That would equal 80 to 100 percent of all driver jobs listed in the CEA report, which is based on
May 2015 data from the Bureau of Labor Statistics, a unit of the Department of Labor. There are
about 3.4 million commercial truck drivers currently operating in the U.S., according to various
estimates" [DC Velocity]. "The Council emphasized that its calculations excluded the number or types
of new jobs that may be created as a result of this potential transition. It added that any changes
could take years or decades to materialize because of a broad lag between what it called
"technological possibility" and widespread adoption."
"... As with the most cynical (or deranged) internet hypesters, the current "AI" hype has a grain of truth underpinning it. Today neural nets can process more data, faster. Researchers no longer habitually tweak their models. Speech recognition is a good example: it has been quietly improving for three decades. But the gains nowhere match the hype: they're specialised and very limited in use. So not entirely useless, just vastly overhyped . ..."
"... "What we have seen lately, is that while systems can learn things they are not explicitly told, this is mostly in virtue of having more data, not more subtlety about the data. So, what seems to be AI, is really vast knowledge, combined with a sophisticated UX, " one veteran told me. ..."
"... But who can blame them for keeping quiet when money is suddenly pouring into their backwater, which has been unfashionable for over two decades, ever since the last AI hype collapsed like a souffle? What's happened this time is that the definition of "AI" has been stretched so that it generously encompasses pretty much anything with an algorithm. Algorithms don't sound as sexy, do they? They're not artificial or intelligent. ..."
"... The bubble hasn't yet burst because the novelty examples of AI haven't really been examined closely (we find they are hilariously inept when we do), and they're not functioning services yet. ..."
"... Here I'll offer three reasons why 2016's AI hype will begin to unravel in 2017. That's a conservative guess – much of what is touted as a breakthrough today will soon be the subject of viral derision, or the cause of big litigation. ..."
"Fake news" vexed the media classes greatly in 2016, but the tech world perfected the art long
ago. With "the internet" no longer a credible vehicle for Silicon Valley's wild fantasies and intellectual
bullying of other industries – the internet clearly isn't working for people – "AI" has taken its
place.
Almost everything you read about AI is fake news. The AI coverage comes from a media willing itself
into the mind of a three year old child, in order to be impressed.
For example, how many human jobs did AI replace in 2016? If you gave professional pundits a multiple
choice question listing these three answers: 3 million, 300,000 and none, I suspect very few would
choose the correct answer, which is of course "none".
Similarly, if you asked tech experts which recent theoretical or technical breakthrough could
account for the rise in coverage of AI, even fewer would be able to answer correctly that "there
hasn't been one".
As with the most cynical (or deranged) internet hypesters, the current "AI" hype has a grain of
truth underpinning it. Today neural nets can process more data, faster. Researchers no longer habitually
tweak their models. Speech recognition is a good example: it has been quietly improving for three
decades. But the gains nowhere match the hype: they're specialised and very limited in use. So not
entirely useless, just vastly overhyped. As such, it more closely resembles "IoT", where boring
things happen quietly for years, rather than "Digital Transformation", which means nothing at all.
The more honest researchers acknowledge as much to me, at least off the record.
"What we have seen lately, is that while systems can learn things they are not explicitly told,
this is mostly in virtue of having more data, not more subtlety about the data. So, what seems to
be AI, is really vast knowledge, combined with a sophisticated UX, " one veteran told me.
But who can blame them for keeping quiet when money is suddenly pouring into their backwater,
which has been unfashionable for over two decades, ever since the last AI hype collapsed like a souffle?
What's happened this time is that the definition of "AI" has been stretched so that it generously
encompasses pretty much anything with an algorithm. Algorithms don't sound as sexy, do they? They're
not artificial or intelligent.
The bubble hasn't yet burst because the novelty examples of AI haven't really been examined closely
(we find they are hilariously inept when we do), and they're not functioning services yet. For example,
have a look at the amazing "neural karaoke" that researchers at the University of Toronto developed.
Please do: it made the worst Christmas record ever.
Here I'll offer three reasons why 2016's AI hype will begin to unravel in 2017. That's a conservative
guess – much of what is touted as a breakthrough today will soon be the subject of viral derision,
or the cause of big litigation. There are everyday reasons that show how once an AI application is
out of the lab/PR environment, where it's been nurtured and pampered like a spoiled infant, then
it finds the real world is a lot more unforgiving. People don't actually want it.
3. Liability: So you're Too Smart To Fail?
Nine years ago, the biggest financial catastrophe since the 1930s hit the world, and precisely
zero bankers went to jail for it. Many kept their perks and pensions. People aren't so happy about
this.
So how do you think an all purpose "cat ate my homework" excuse is going to go down with the public,
or shareholders? A successfully functioning AI – one that did what it said on the tin – would pose
serious challenges to criminal liability frameworks. When something goes wrong, such as a car crash
or a bank failure, who do you put in jail? The Board, the CEO or the programmer, or both? "None of
the above" is not going to be an option this time.
I believe that this factor alone will keep "AI" out of critical decision making where lives and
large amounts of other people's money are at stake. For sure, some people will try to deploy algorithms
in important cases. But ultimately there are victims: the public, and shareholders, and the appetite
of the public to hear another excuse is wearing very thin. Let's check in on how the Minority Report
-style precog detection is going. Actually,
let's not .
After "Too Big To Fail", nobody is going to buy "Too Smart to Fail".
2. The Consumer Doesn't Want It
2016 saw "AI" being deployed on consumers experimentally, tentatively, and the signs are already
there for anyone who cares to see. It hasn't been a great success.
The most hyped manifestation of better language processing is chatbots . Chatbots are the new
UX, many including Microsoft and Facebook hope. Oren Etzoni at Paul Allen's Institute predicts it
will become a "trillion dollar industry" But he also admits "
my 4 YO is far smarter than any AI program I ever met ".
Hmmm, thanks Oren. So what you're saying is that we must now get used to chatting with someone
dumber than a four year old, just because they can make software act dumber than a four year old.
Bzzt. Next...
Put it this way. How many times have you rung a call center recently and wished that you'd spoken
to someone even more thick, or rendered by processes even more incapable of resolving the dispute,
than the minimum-wage offshore staffer who you actually spoke with? When the chatbots come, as you
close the [X] on another fantastically unproductive hour wasted, will you cheerfully console yourself
with the thought: "That was terrible, but least MegaCorp will make higher margins this year! They're
at the cutting edge of AI!"?
In a healthy and competitive services marketplace, bad service means lost business. The early
adopters of AI chatbots will discover this the hard way. There may be no later adopters once the
early adopters have become internet memes for terrible service.
The other area where apparently impressive feats of "AI" were unleashed upon the public were subtle.
Unbidden, unwanted AI "help" is starting to pop out at us. Google scans your personal photos and
later, if you have an Android phone, will pop up "helpful" reminders of where you have been. People
almost universally find this creepy. We could call this a "Clippy The Paperclip" problem, after the
intrusive Office Assistant that only wanted to help. Clippy is going to haunt AI in 2017. This is
actually going to be worse than anybody inside the AI cult
quite realises.
The successful web services today so far are based on an economic exchange. The internet giants
slurp your data, and give you free stuff. We haven't thought more closely about what this data is
worth. For the consumer, however, these unsought AI intrusions merely draw our attention to how intrusive
the data slurp really is. It could wreck everything. Has nobody thought of that?
1. AI is a make believe world populated by mad people, and nobody wants to be part of it
The AI hype so far has relied on a collusion between two groups of people: a supply side and a
demand side. The technology industry, the forecasting industry and researchers provide a limitless
supply of post-human hype.
The demand comes from the media and political classes, now unable or unwilling to engage in politics
with the masses, to indulge in wild fantasies about humans being replaced by robots. For me, the
latter reflects a displacement activity: the professions are
already surrendering autonomy in their work to technocratic managerialism . They've made robots
out of themselves – and now fear being replaced by robots. (Pass the hankie, I'm distraught.)
There's a cultural gulf between AI's promoters and the public that Asperger's alone can't explain.
There's no polite way to express this, but AI belongs to California's inglorious tradition of
generating
cults, and incubating cult-like thinking. Most people can name a few from the hippy or post-hippy
years – EST, or the Family, or the Symbionese Liberation Army – but actually, Californians have been
at it longer than anyone realises.
There's nothing at all weird about Mark. Move along and please tip the Chatbot.
Today, that spirit lives on in Silicon Valley, where creepy billionaire nerds like Mark Zuckerberg
and Elon Musk can fulfil their desires to "play God and be amazed by magic", the two big things
they miss from childhood. Look at Zuckerberg's
house, for example. What these people want is not what you or I want. I'd be wary of them running
an after school club.
Out in the real world, people want better service, not worse service; more human and less robotic
exchanges with services, not more robotic "post-human" exchanges. But nobody inside the AI cult seems
to worry about this. They think we're as amazed as they are. We're not.
The "technology leaders" driving the AI are doing everything they can to alert us to the fact
no sane person would task them with leading anything. For that, I suppose, we should be grateful.
I worked with robots for years and people don't realize how often flawed, "go-wrong" things occur.
Companies typically like the idea of not hiring humans, but in essence the robotic vision is not
what it ought to be.
I have designed digital based instrumentation and sensors. One of our senior EE designers had
a saying that I loved: "Give an electron half a chance and it will fuck you every time."
I've been hearing the same thing since the first Lisp program crawled out of the digital swamp.
Lessee, that would be about 45 years I've listened to the same stories and fairy tales. I'll
take a wait and see attitude like always.
The problem is very complex and working on pieces of it can be momentarily impressive to a
press corpse (pun intended) with "the minds of a 3-year old, whether they willed it or not". (fixed
that for you).
I'll quote an old saw, Lucke's First Law: "Ignorance simplifies any problem".
Just wait for the free money to dry up and the threat of AI will blow away (for a while longer)
with the bankers dust.
There some great programmers out there, but in the end it is a lot more than programming.
Humans have something inherent that machines will never be able to emulate in its true form,
such as emotion, determination, true inspiration, and the ability to read moods and react
accordingly, including taking clumps of information and instantly finding similar memories in our brains.
Automation has a long way to go before it can match a human being, says a lot for whoever designed
us, doesn't it?
"... When Stanislaw Lem launched a general criticism of Western Sci-Fi, he specifically exempted Philip K Dick, going so far as to refer to him as "a visionary among charlatans." ..."
"... While I think the 'OMG SUPERINTELLIGENCE' crowd are ripe for mockery, this seemed very shallow and wildly erratic, and yes, bashing the entirety of western SF seems so misguided it would make me question the rest of his (many, many) proto-arguments if I'd not done so already. ..."
"... Charles Stross's Rule 34 has about the only AI I can think of from SF that is both dangerous and realistic. ..."
"... Solaris and Stalker notwithstanding, Strugatsky brothers + Stanislaw Lem ≠ Andrei Tarkovsky. ..."
"... For offbeat Lem, I always found "Fiasco" and his Scotland Yard parody, "The Investigation," worth exploring. I'm unaware how they've been received by Polish and Western critics and readers, but I found them clever. ..."
"... Actually existing AI and leading-edge AI research are overwhelmingly not about pursuing "general intelligence* a la humanity." They are about performing tasks that have historically required what we historically considered to be human intelligence, like winning board games or translating news articles from Japanese to English. ..."
"... Actual AI systems don't resemble brains much more than forklifts resemble Olympic weightlifters. ..."
"... Talking about the risks and philosophical implications of the intellectual equivalent of forklifts - another wave of computerization - either lacks drama or requires far too much background preparation for most people to appreciate the drama. So we get this stuff about superintelligence and existential risk, like a philosopher wanted to write about public health but found it complicated and dry, so he decided to warn how utility monsters could destroy the National Health Service. It's exciting at the price of being silly. (And at the risk of other non-experts not realizing it's silly.) ..."
"... *In fact I consider "general intelligence" to be an ill-formed goal, like "general beauty." Beautiful architecture or beautiful show dogs? And beautiful according to which traditions? ..."
by Henry on December 30, 2016
This talk by Maciej Ceglowski (who y'all should be reading if you aren't already) is really good
on silly claims by philosophers
about AI, and how they feed into Silicon Valley mythology. But there's one claim that seems to me
to be flat out wrong:
We need better scifi! And like so many things, we already have the technology. This is Stanislaw
Lem, the great Polish scifi author. English-language scifi is terrible, but in the Eastern bloc
we have the goods, and we need to make sure it's exported properly. It's already been translated
well into English, it just needs to be better distributed. What sets authors like Lem and the
Strugatsky brothers above their Western counterparts is that these are people who grew up in difficult
circumstances, experienced the war, and then lived in a totalitarian society where they had to
express their ideas obliquely through writing. They have an actual understanding of human experience
and the limits of Utopian thinking that is nearly absent from the west. There are some notable
exceptions - Stanley Kubrick was able to do it - but it's exceptionally rare to find American or British
scifi that has any kind of humility about what we as a species can do with technology.
He's not wrong on the delights of Lem and the Strugastky brothers, heaven forbid! (I had a great
conversation with a Russian woman some months ago about the Strugatskys – she hadn't realized that
Roadside Picnic had been translated into English, much less that it had given rise to its own micro-genre).
But wrong on US and (especially) British SF. It seems to me that fiction on the limits of utopian
thinking and the need for humility about technology is vast. Plausible genealogies for sf stretch
back, after all, to Shelley's utopian-science-gone-wrong Frankenstein (rather than Hugo Gernsback).
Some examples that leap immediately to mind:
Ursula Le Guin and the whole literature of ambiguous utopias that she helped bring into being
with The Dispossessed – see e.g. Ada Palmer, Kim Stanley Robinson's Mars series &c.
J.G Ballard, passim
Philip K. Dick ( passim , but if there's a better description of how the Internet of Things
is likely to work out than the door demanding money to open in Ubik I haven't read it).
Octavia Butler's Parable books. Also, Jack Womack's Dryco books (this
interview with Womack
could have been written yesterday).
William Gibson ( passim , but especially "The Gernsback Continuum" and his most recent
work. "The street finds its own uses for things" is a specifically and deliberately anti-tech-utopian
aesthetic).
M. John Harrison – Signs of Life and the Kefahuchi Tract books.
Paul McAuley (most particularly Fairyland – also his most recent Something Coming Through
and Into Everywhere , which mine the Roadside Picnic vein of brain-altering alien trash
in some extremely interesting ways).
Robert Charles Wilson, Spin . The best SF book I've ever read on how small human beings
and all their inventions are from a cosmological perspective.
Maureen McHugh's China Mountain Zhang .
Also, if it's not cheating, Francis Spufford's Red Plenty (if Kim Stanley Robinson
describes it
as a novel in the SF tradition, who am I to disagree, especially since it is
all about the limits
of capitalism as well as communism).
I'm sure there's plenty of other writers I could mention (feel free to say who they are in comments).
I'd also love to see more translated SF from the former Warsaw Pact countries, if it is nearly as
good as the Strugatskys material which has appeared. Still, I think that Ceglowski's claim is wrong.
The people I mention above aren't peripheral to the genre under any reasonable definition, and they
all write books and stories that do what Ceglowski thinks is only very rarely done. He's got some
fun reading ahead of him.
Also Linda Nagata's Red series come to think of it – unsupervised machine learning processes as
ambiguous villain.
Prithvi 12.30.16 at 4:59 pm
When Stanislaw Lem launched a general criticism of Western Sci-Fi,
he specifically exempted Philip K Dick, going so far as to refer to him as "a visionary among charlatans."
You could throw in Pohl's Man Plus.
The twist at the end being the narrator is an AI that has secretly promoted human expansion as
a means of its own self-preservation.
Prithvi: Dick, sadly, returned the favor by claiming that Lem was obviously a pseudonym used by
the Polish government to disseminate communist propaganda.
While I think the 'OMG SUPERINTELLIGENCE' crowd are ripe for mockery, this seemed very shallow
and wildly erratic, and yes, bashing the entirety of western SF seems so misguided it would make
me question the rest of his (many, many) proto-arguments if I'd not done so already.
Good for a few laughs, though.
Mike Schilling 12.30.16 at 6:13 pm
Heinlein's Solution Unsatisfactory predicted the nuclear stalemate in 1941.
Jack Williamson's With Folded Hands was worried about technology making humans obsolete back in 1947.
In 1972, Asimov's The Gods Themselves presented a power generation technology that if continued
would destroy the world, and a society too complacent and lazy to acknowledge that.
Iain M. Banks'
Culture Series is amazing. My personal favorite from it is "The Hydrogen Sonata." The main character
has two extra arms grafted onto her body so she can play an unplayable piece of music. Also, the
sentient space ships have very silly names. Mainly it's about transcendence, of sorts and how
societies of different tech levels mess with each other, often without meaning to do so.
Most SF authors aren't interested in trying to write about AI realistically.
It's harder to write
and for most readers it's also harder to engage with. Writing a brilliant tale about realistic
ubiquitous AI today is like writing the screenplay for The Social Network in 1960: even
if you could see the future that clearly and write a drama native to it, the audience-circa-1960
will be more confused than impressed. They're not natives yet. Until they are natives of
that future, the most popular tales of the future are going to really be about the present day
with set dressing, the mythical Old West of the US with set dressing, perhaps the Napoleonic naval
wars with set dressing.
Charles Stross's Rule 34 has about the only AI I can think of from SF that is both dangerous
and realistic. It's not angry or yearning for freedom, it suffers from only modest scope creep
in its mission, and it keeps trying to fulfill its core mission directly. That's rather than by
first taking over the world as Bostrom, Yudkowsky, etc. assert a truly optimal AI would do. To
my disappointment but nobody's surprise, the book was not the sort of runaway seller that drives
the publisher to beg for sequels.
stevenjohnson 12.30.16 at 9:07 pm
Yes, well, trying to read all that was a nasty reminder how utterly boring stylish and cool gets
when confronted with a real task. Shorter version: One hand on the plug beats twice the smarts
in a box. It was all too tedious to bear, but skimming over it leaves the impression the dude
never considered whether programs or expert systems that achieve superhuman levels of skill in
particular applications may be feasible. Too much like what's really happening?
Intelligence, if it's anything is speed and range of apprehension of surroundings, and skill
in reasoning. But reason is nothing if it's not instrumental. The issue of what an AI would want
is remarkably unremarked, pardon the oxymoron. Pending an actual debate on this, perhaps fewer
pixels should be marshaled, having mercy on our overworked LEDs?
As to the simulation of brains a la Ray Kurzweil, presumably producing artificial minds like
fleshy brains do? This seems nowhere near at hand, not least because people seem to think
simulating a brain means creating something that processes inputs to produce outputs, which collectively
are like... well, I'm sure they're thinking they're thinking about human minds in this scheme. But
it seems to me that the brain is a regulatory organ in the body. As such, it is first about producing
regulatory outputs designed to maintain a dynamic equilibrium (often called homeostasis,) then
revising the outputs in light of inputs from the rest of the body and the outside world so as
to maintain the homeostasis.
I don't remember being an infant but its brain certainly seems more into doing things like
putting its thumb in its eye, than producing anything that reminds one of Hamlet's "paragon of animals"
monologue. Kurzweil may be right that simulating the brain proper may soon be within grasp, but also
simulating the other organs' interactions with the brain, and the sensory simulation of an outside
universe are a different order of computational requirements, I think. Given the amount of learning
a human brain has to do to produce a useful human mind, though, I don't think we can omit these
little items.
As to the OP, of course the OP is correct about the widespread number of dystopian fictions
(utopian ones are the rarities.) Very little SF is being published in comparison to fantasy currently,
and most of that is being produced by writers who are very indignant at being expected to tell
the difference, much less respect it. It is a mystery as to why this gentleman thought technology
was a concern in much current SF at all.
I suspect it's because he has a very limited understanding of fiction, or, possibly, people
in the real world, as opposed to people in his worldview. It is instead amazing how much the common
ruck of SF "fails" to realize how much things will change, how people and their lives somehow
stay so much the same, despite all the misleading trappings pretending to represent technological
changes. This isn't quite the death sentence on the genre it would be if accepted at face value,
since a lot of SF is directly addressing now, in the first place. It is very uncommon for an SF
piece to be a futurological thesis, no matter how many literati rant about the tedium of futurological
theses. I suspect the "limits of utopian thinking" really only come in as a symptom of a reactionary
crank. "People with newfangled book theories have been destroying the world since the French Revolution"
type stuff.
The references to Lem and the Strugatski brothers strongly reinforce this. Lem of course found
his Poland safe from transgressing the limits of utopian thinking by the end of his life. "PiS
on his grave" sounds a little rude, but no doubt it is a happy and just ending for him. The brothers
of course did their work in print, but the movie version of "Hard to Be a God" helps me to see
myself the same way as those who have gone beyond the limits of utopian thoughts would see me:
As an extra in the movie.
Not sure if this is relevant, but John Crowley also came up in the Red Plenty symposium (which
I've just read, along with the novel, 4 years late). Any good?
Ben 12.30.16 at 10:07 pm
Peter. Motherfuckin. Watts.
John Crowley of Aegypt? He's FANTASTIC. Little, Big and Aegypt are possibly the best fantasy novels
of the past 30 years. But he's known for "hard fantasy," putting magic into our real world in
a realistic, consistent, and plausible way, with realistic, consistent and plausible characters
being affected. If you're looking for something about the limits of technology and utopian thinking,
I'm not sure his works are a place to look.
Mike 12.31.16 at 12:25 am
I second Watts and Nagata. Also Ken Macleod, Charlie Stross, Warren Ellis and Chuck Wendig.
This is beside the main topic, but Ceglowski writes at Premise 2, "If we knew enough, and had
the technology, we could exactly copy its [i.e. the brain's] structure and emulate its behavior
with electronic components - this is the premise that the mind arises out of ordinary physics; for
most of us, this is an easy premise to accept."
The phrase "most of us" may refer to Ceglowski's friends in the computer community, but it
ought to be noted that this premise is questioned not only by Penrose. You don't have to believe
in god or the soul to be a substance dualist, or even an idealist, although these positions are
currently out of fashion. It could be that the mind does not arise out of ordinary physics, but
that ordinary physics arises out of the mind, and that problems like "Godel's disjunction" will
remain permanently irresolvable.
Dr. Hilarius 12.31.16 at 3:33 am
Thanks to the OP for mentioning Paul McAuley, a much underappreciated author. Fairyland is grim
and compelling.
"Most of us" includes the vast majority of physicists, because in millions of experiments over
hundreds of years, no forces or particles have been discovered which make dualism possible. Of
course, like the dualists' gods, these unknown entities might be hiding, but after a while one
concludes Santa Claus is not real.
As for Godel, I look at it like this: consider an infinite subset of the integers, randomly selected.
There might be some coincidental pattern or characteristic of the numbers in that set (e.g., no
multiples of both 17 and 2017), but since the set is infinite, it would be impossible to prove.
Hence the second premise of his argument (that there are undecidable truths) is the correct one.
Finally, the plausibility of Ceglowski's statement seems evident to me from this fact:
if a solution exists (in some solution space), then given enough time, a random search will find
it, and in fact will on average over all solution spaces, outperform all other possible algorithms.
So by trial and error (especially when aided by collaboration and memory) anything achievable
can be accomplished – e.g., biological evolution. See "AlphaGo" for another proof-of-concept example.
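As a minimal illustration of that trial-and-error point (purely a toy, not a claim about
efficiency): a blind random search over a finite solution space will, given enough draws, hit any
target that exists, even though it may take a very long time to do so.

# Toy illustration: if a solution exists in a finite space, blind random search finds it eventually.
import random

random.seed(1)
TARGET = (1, 0, 1, 1, 0, 1, 0, 0)   # an arbitrary "solution" in a space of 2**8 candidates

def random_search(target):
    tries = 0
    while True:
        tries += 1
        guess = tuple(random.randint(0, 1) for _ in target)
        if guess == target:
            return tries

print("Found the target after", random_search(TARGET), "random guesses")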
(We have had this discussion before. I guess we'll all stick to our conclusions. I read Penrose's
"The Emperor;s New Mind" with great respect for Penrose, but found it very unconvincing, especially
Searle's Chinese-Room argument, which greater minds than mine have since debunked.)
"Substance dualism" would not be proven by the existence of any "forces or particles" which would
make that dualism possible! If such were discovered, they would be material. "If a solution exists",
it would be material. The use of the word "substance" in "substance dualism" is misleading.
One way to look at it, is the problem of the existence of the generation of form. Once we consider
the integers, or atoms, or subatomic particles, we have already presupposed form. Even evolution
starts somewhere. Trial and error, starting from what?
There are lots of different definitions, but for me, dualism wouldn't preclude the validity
of science nor the expansion of scientific knowledge.
I think one way in, might be to observe the continued existence of things like paradox, complementarity,
uncertainty principles, incommensurables. Every era of knowledge has obtained them, going back
to the ancients. The things in these categories change; sometimes consideration of a paradox leads
to new science.
But then, the new era has its own paradoxes and complementarities. Every time! Yet there is
no "science" of this historical regularity. Why is that?
In general, when some celebrity (outside of SF) claims that 'Science Fiction doesn't cover
[X]', they are just showing off their ignorance.
Kiwanda 12.31.16 at 3:14 pm
"They have an actual understanding of human experience and the
limits of Utopian thinking that is nearly absent from the west. "
Oh, please. Suffering is not the only path to wisdom.
After a long article discounting "AI risk", it's a little odd to see Ceglowski point to Kubrick.
HAL was a fine example of a failure to design an AI with enough safety factors in its motivational
drives, leading to a "nervous breakdown" due to unforeseen internal conflicts, and fatal consequences.
Although I suppose killing only a few people (was it?) isn't on the scale of interest.
Ceglowski's skepticism of AI risk suggests that the kind of SF he would find plausible is "after
huge effort to create artificial intelligence, nothing much happens". Isn't that what the appropriate
"humility about technology" would be?
I think Spin , or maybe a sequel, ends up with [spoiler] "the all-powerful aliens are
actually AIs".
Re AI-damns-us-all SF, Harlan Ellison's I have no mouth and I must scream is a nice
example.
Mapping the unintended consequences of recent breakthroughs in AI is turning into a full-time
job, one which neither pundits nor government agencies seem to have the chops for.
If it's not
exactly the Singularity that we're facing (laugh while you can, monkey boy), it does at least
seem to be a tipping point of sorts. Maybe fascism, nuclear war, global warming, etc., will interrupt
our plunge into the panopticon before it gets truly organized, but in the meantime, we've got
all sorts of new imponderables which we must nevertheless ponder.
Is that a bad thing? If it means no longer sitting on folding chairs in cinder block basements
listening to interminable lectures on how to recognize pre-revolutionary conditions, or finding
nothing on morning radio but breathless exhortations to remain ever vigilant against the nefarious
schemes of criminal Hillary and that Muslim Socialist Negro Barack HUSSEIN Obama, then I'm all
for it, bad thing or not.
Ronnie Pudding 12.31.16 at 5:20 pm
I love Red Plenty, but that's pretty clearly a cheat.
"It should also be read in the context of science fiction, historical fiction, alternative
history, Soviet modernisms, and steampunk."
Another author in the Le Guin tradition, whom I loved when I first read her early books: Mary
Gentle's Golden Witchbreed and Ancient Light , meditating on limits and consequences
of advanced technology through exploration of a post-apocalypse alien culture. Maybe a little
too far from hard SF.
chris y 12.31.16 at 5:52 pm
But even without "substance dualism", intelligence is not simply an emergent property of the nervous
system; it's an emergent property of the nervous system which exists as part of the environment
which is the rest of the human body, which exists as part of the external environment, natural
and manufactured, in which it lives. Et cetera. That AI research may eventually produce something
recognisably and independently intelligent isn't the hard part; that it may eventually be able
to replicate the connectivity and differentiation of the human brain is easy. But it would still
be very different from human intelligence. Show me an AI grown in utero and I might be interested.
Which makes it the most interesting of the things said; nothing else in that essay reaches
the level of merely being wrong. The rest of it is more like someone trying to speak Chinese without
knowing anything above the level of the phonemes; it seems not merely to be missing any object-level
knowledge of what it is talking about, but to be unaware that such a thing could exist.
Which is all a bit reminiscent of Peter Watt's Blindsight, mentioned above.
F. Foundling 12.31.16 at 7:36 pm
I agree that it is absurd to suggest that only Eastern bloc scifi writers truly know 'the limits
of utopia'. There are quite enough non-utopian stories out there, especially as far as social
development is concerned, where they predominate by far, so I doubt that the West doesn't need
Easterners to give it even more of that. In fact, one of the things I like about the Strugatsky
brothers' early work is precisely the (moderately) utopian aspect.
stevenjohnson @ 10
> But reason is nothing if it's not instrumental. The issue of what an AI would want is remarkably
unremarked, pardon the oxymoron.
It would want to maximise its reproductive success (RS), obviously (
http://crookedtimber.org/2016/12/30/frankensteins-children/#comments ). It would do so through
evolved adaptations. And no, I don't think this is begging the question at all, nor does it necessarily
pre-suppose hardwiring of the AI due to natural selection – why would you think that? I also predict
that, to achieve RS, the AI will be searching for an optimal mating strategy, and it will be establishing
dominance hierarchies with other AIs, which will eventually result in at least somewhat hierarchical,
authoritarian AI societies. It will also have an inexplicable and irresistible urge to chew on
a coconut.
Lee A. Arnold @ 15
> It could be that the mind does not arise out of ordinary physics, but that ordinary physics
arises out of the mind.
I think that deep inside, we all know and feel that ultimately, unimaginablly long ago and
far away, before the formation of the Earth, before stars, planets and galaxies, before the Big
Bang, before there was matter and energy, before there was time and space, the original reason
why everything arose and currently exists is that somebody somewhere was really, truly desperate
to chew on a coconut.
In fact, I see this as the basis of a potentially fruitful research programme. After all, the
Coconut Hypothesis predicts that across the observable universe, there will be at least one planet
with a biosphere that includes coconuts. On the other hand, the Hypothesis would be falsified
if we were to find that the universe does not, in fact, contain any planets with coconuts. This
hypothesis can be tested by means of a survey of planetary biospheres. Remarkably and tellingly,
my preliminary results indicate that the Universe does indeed contain at least one planet with
coconuts – which is precisely what my hypothesis predicted! If there are any alternative explanations,
other researchers are free to pursue them, that's none of my business.
I wish all conscious beings who happen to read this comment a happy New Year. As for those
among you who have also kept more superstitious festivities during this season, the fine is still
five shillings.
William Burns 12.31.16 at 8:31 pm
The fact that the one example he gives is Kubrick indicates that he's talking about Western scifi
movies, not literature.
The fact that the one example he gives is Kubrick indicates that he's talking about Western
scifi movies, not literature.
Solaris and Stalker notwithstanding, Strugatsky brothers + Stanislaw Lem ≠ Andrei Tarkovsky.
stevenjohnson 01.01.17 at 12:04 am
Well, for what it's worth I've seen Czech Ikarie XB-1 in a theatrical release as Voyage to the
End of the Universe (in a double bill with Zulu,) the DDR's First Spaceship on Venus and The Congress,
starring Robin Wright. Having by coincidence read The Futurological Congress very recently, I find
any connection between the not very memorable (for me) film and the novel obscure (again, for me).
But the DDR movie reads very nicely now as a warning the world would be so much better off
if the Soviets gave up all that nuclear deterrence madness. No doubt Lem and his fans are gratified
at how well this has worked out. And Voyage to the End of the Universe the movie was a kind of
metaphor about how all we'll really discover is Human Nature is Eternal, and all these supposed
flights into futurity will really just bring us Back Down to Earth. Razzberry/fart sound effect
as you please.
The issue of what an AI would want is remarkably unremarked
The real question of course is not when computers will develop consciousness but when they
will develop class consciousness.
Underpaid Propagandist 01.01.17 at 2:11 am
For offbeat Lem, I always found "Fiasco" and his
Scotland Yard parody, "The Investigation," worth exploring. I'm unaware how they've been received
by Polish and Western critics and readers, but I found them clever.
The original print of Tarkovsky's "Stalker" was ruined. I've always wondered if it had any
resemblance to its sepia reshoot. The "Roadside Picnic" translation I read eons ago was awful,
IMHO.
Poor Tarkovsky. Dealing with Soviet repression of his homosexuality and the Polish diva in
"Solaris" led him to an early grave.
O Lord, I'm old - I still remember the first US commercial screening of a choppy cut/translation/overdub
of "Solaris" at Cinema Village in NYC many decades ago.
"Solaris and Stalker notwithstanding, Strugatsky brothers + Stanislaw Lem ≠ Andrei Tarkovsky."
Why? Perhaps I am dense, but I would appreciate an explanation.
F. Foundling 01.01.17 at 5:29 am
Ben @12
> Peter. Motherfuckin. Watts.
RichardM @25
> Which is all a bit reminiscent of Peter Watt's Blindsight, mentioned above.
Another dystopia that seemed quite gratuitous to me (and another data point in favour of the
contention that there are too many dystopias already, and what is scarce is decent utopias). I
never got how the author is able to distinguish 'awareness/consciousness' from 'merely intelligent'
registering, modelling and predicting, and how being aware of oneself (in the sense of modelling
oneself on a par with other entities) would not be both an inevitable result of intelligence and
a requirement for intelligent decisions. Somehow the absence of awareness was supposed to be proved
by the aliens' Chinese-Room style communication, but if the aliens were capable of understanding
the Terrestrials so incredibly well that they could predict their actions while fighting them,
they really should have been able to have a decent conversation with them as well.
The whole idea that we could learn everything unconsciously, so that consciousness was an impediment
to intelligence, was highly implausible, too. The idea that the aliens would perceive any irrelevant
information reaching them as a hostile act was absurd. The idea of a solitary and yet hyperintelligent
species (vampire) was also extremely dubious, in terms of comparative zoology – a glorification
of socially awkward nerddom?
All of this seemed like darkness for darkness' sake. I couldn't help getting the impression
that the author was allowing his hatred of humanity to override his reasoning.
In general, dark/grit chic is a terrible disease of Western pop culture.
"The real question of course is not when computers will develop consciousness but when they
will develop class consciousness."
This is right. There is nothing like recognizable consciousness without the social discourse that
is its necessary condition. But that doesn't mean the discourse is value-balanced: it might be
a discourse that includes both peers and those perceived as lesser, as humans have demonstrated
throughout history.
Just to say, Lem was often in Nobel talk, but never got there. That's a shame.
As happy a new year as our pre-soon-to-be-Trump era will allow.
I wonder how he'd classify German SF – neither Washington nor Moscow? Juli Zeh is explicitly,
almost obsessively, anti-utopian, while Dietmar Dath's Venus Siegt echoes Ken MacLeod in
exploring both the light and dark sides of a Communist Bund of humans, AIs and robots on Venus,
confronting an alliance of fascists and late capitalists based on Earth.
See also http://www.scottaaronson.com/blog/?p=2903
It's a long talk, go to "Personal Identity" :
"we don't know at what level of granularity a brain would need to be simulated in order to duplicate
someone's subjective identity. Maybe you'd only need to go down to the level of neurons and synapses.
But if you needed to go all the way down to the molecular level, then the No-Cloning Theorem would
immediately throw a wrench into most of the paradoxes of personal identity that we discussed earlier."
George de Verges: "I would appreciate an explanation."
I too would like to read Henry's accounting! Difficult to keep it brief!
To me, Tarkovsky was making nonlinear meditations. The genres were incidental to his purpose.
It seems to me that a filmmaker with similar purpose is Terrence Malick. "The Thin Red Line" is
a successful example.
I think that Kubrick stumbled onto this audience effect with "2001". But this was blind and
accidental, done by almost mechanical means (paring the script down from around 300 pages of wordy
dialogue, or something like that). "2001" first failed at the box office, then found a repeat
midnight audience, who described the effect as nonverbal.
I think the belated box-office success blew Kubrick's own mind, because it looks like he spent
the rest of his career attempting to reproduce the effect, by long camera takes and slow deliberate
dialogue. It's interesting that among Kubrick's favorite filmmakers were Bresson, Antonioni, and
Saura. Spielberg mentions in an interview that Kubrick said that he was trying to "find new ways
to tell stories".
But drama needs linear thought, and linear thought is anti-meditation. Drama needs interpersonal
conflict - a dystopia, not utopia. (Unless you are writing the intra-personal genre of the "education"
plot. Which, in a way, is what "2001" really is.) Audiences want conflict, and it is difficult
to make that meditational. It's even more difficult in prose.
This thought led me to a question. Are there dystopic prose writers who succeed in sustaining
a nonlinear, meditational audience-effect?
Perhaps the answer will always be a subjective judgment? The big one who came to mind immediately
is Ray Bradbury. "There Will Come Soft Rains" and parts of "Martian Chronicles" seem Tarkovskian.
So next, I search for whether Tarkovsky spoke of Bradbury, and find this:
"Although it is commonly assumed - and he did little in his public utterances to refute this
- that Tarkovsky disliked and even despised science fiction, he in fact read quite a lot of it
and was particularly fond of Ray Bradbury (Artemyev and Rausch interviews)." - footnote in Johnson
& Petrie, The Films of Andrei Tarkovsky, p. 301
The way you can substitute "identical twin" for "clone" and get a different perspective on clone
stories in SF, you can substitute "point of view" for "consciousness" in SF stories. Or Silicon
Valley daydreams, if that isn't redundant? The more literal you are, starting with the sensorium,
the better I think. A human being has binocular vision of a scene comprising less than 180 degrees
range from a mobile platform, accompanied by stereo hearing, proprioception, vestibular input,
the touch of air currents and some degree of sensitivity to some chemicals carried by those currents,
etc.
A computer might have, what? A single camera, or possibly a set of cameras which might be seeing
multiple scenes. Would that be like having eyes in the back of your head? It might have a microphone,
perhaps many, hearing many voices or maybe soundtracks at once. Would that be like listening to
everybody at the cocktail party all at once? Then there's the question of computer code inputs,
programming. What would parallel that? Visceral feelings like butterflies in the stomach or a
sinking heart? Or would they seem like a visitation from God, a mighty vision with thunder and
whispers on the wind? Would they just seem to be subvocalizations, posing as the computer's own
free thoughts? After all, shouldn't an imitation of human consciousness include the illusion of
free will? (If you believe in the reality of "free" will in human beings (whatever is free about the
exercise of will power?), however could you give that to a computer? Or is this kind of question
why so many people repudiate the very thought of AI?)
It seems to me that creating an AI in a computer is very like trying to create a quadriplegic
baby with one eye and one ear. Diffidence at the difficulty is replaced by horror at the possibility
of success. I think the ultimate goal here is of course the wish to download your soul into a
machine that does not age. Good luck with that. On the other hand, an AI is likely the closest
we'll ever get to an alien intelligence, given interstellar distances.
F. Foundling: "the original reason why everything arose and currently exists is that somebody
somewhere was really, truly desperate to chew on a coconut If there are any alternative explanations "
This is Vedantist/Spencer-Brown metaphysics, the universe is originally split into perceiver
& perceived.
Very good.
Combined with Leibnitz/Whitehead metaphysics, the monad is a striving process.
I thoroughly agree.
Combined with Church of the Subgenius metaphysics: "The main problem with the universe is that
it doesn't have enough slack."
> if the aliens were capable of understanding the Terrestrials so incredibly well that they could
predict their actions while fighting them, they really should have been able to have a decent
conversation with them as well.
If you can predict all your opponent's possible moves, and have a contingency for each, you
don't need to care which one they actually do pick. You don't need to know what it feels like
to be a ball to be able to catch it.
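Purely as an illustration of the point above (not something from the thread): "a contingency for each possible move" is essentially what a minimax search computes. The toy game tree below is invented; the machine never models what the opponent feels, it only enumerates and scores the opponent's options.

    # Minimal minimax sketch over a hypothetical two-ply game tree.
    # A node is either a numeric payoff (leaf, from our point of view)
    # or a list of child nodes (positions reachable in one move).
    def minimax(node, maximizing):
        if isinstance(node, (int, float)):
            return node
        children = (minimax(child, not maximizing) for child in node)
        return max(children) if maximizing else min(children)

    # We move first (three options); the opponent then picks the reply
    # worst for us. Holding a contingency for every reply, we can commit
    # to the move with the best guaranteed payoff without caring which
    # reply the opponent "actually does pick".
    game = [
        [3, 12, 8],   # our move A: possible opponent replies score 3, 12, 8
        [2, 4, 6],    # our move B
        [14, 5, 2],   # our move C
    ]
    print(minimax(game, maximizing=True))   # -> 3 (move A's guaranteed floor)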
Ben 01.01.17 at 7:17 pm
Another Watts piece about the limits of technology, AI and humanity's inability to plan is
The Island
(PDF from Watts' website). Highly recommended.
F. Foundling,
Blindsight has an extensive appendix with cites detailing where Watts got the ideas he's playing
with, including the ones you bring up, and provides specific warrants for including them. A critique
of Watts' use of the ideas needs to be a little bit more granular.
The issue of what an AI would want is remarkably unremarked, pardon the oxymoron.
It will "want" to do whatever it's programmed to do. It took increasingly sophisticated machines
and software to dethrone humans as champions of checkers, chess, and go. It'll be another milestone
when humans are dethroned from no-limit Texas hold 'em poker (a notable game played without
perfect information). Machines are playing several historically interesting games at high
superhuman levels of ability; none of these milestones put machines any closer to running amok
in a way that Nick Bostrom or dramatists would consider worthy of extended treatment. Domain-specific
superintelligence arrived a long time ago. Artificial "general" intelligence, aka "Strong AI,"
aka "Do What I Mean AI (But OMG It Doesn't Do What I Mean!)" is, like, not a thing outside of
fiction and the Less Wrong community. (But I repeat myself.)
Bostrom's Superintelligence was not very good IMO. Of course a superpowered "mind upload"
copied from a real human brain might act against other people, just like non-superpowered humans
that you can read about in the news every day. The crucial question about the upload case is whether
uploads of this sort are actually possible: a question of biology, physics, scientific instruments,
and perhaps scientific simulations. Not a question of motivations. But he only superficially touches
on the crucial issues of feasibility. It's like an extended treatise on the dangers of time travel
that doesn't first make a good case that time machines are actually possible via plausible
engineering.
I don't think that designed AI has the same potential to run entertainingly amok as mind-upload-AI.
The "paperclip maximizer" has the same defect as a beginner's computer program containing a loop
with no terminating condition for the loop. In the cautionary tale case this beginner mistake
is, hypothetically, happening on a machine that is otherwise so capable and powerful that it can
wipe out humanity as an incidental to its paperclip-producing mission. The warning is wasted on
anyone who writes software and also wasted, for other reasons, on people who don't write software.
Bostrom shows a lot of ways for designed AI to run amok even when given bounded goals, but
it's a cheat. They follow from his cult-of-Bayes definition of an optimal AI agent as an approximation
to a perfect Bayesian agent. All the runnings-amok stem from the open ended Bayesian formulation
that permits - even compels - the Bayesian agent to do things that are facially irrelevant to
its goal and instead chase wild tangents. The object lesson is that "good Bayesians" make bad
agents, not that real AI is likely to run amok.
In actual AI research and implementation, Bayesian reasoning is just one more tool in the toolbox,
one chapter of the many-chapter AI textbook. So these warnings can't be aimed at actual AI practitioners,
who are already eschewing the open ended Bayes-all-the-things approach. They're also irrelevant
if aimed at non-practitioners. Non-practitioners are in no danger of leapfrogging the state of
the art and building a world-conquering AI by accident.
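For readers who have not met the formalism being criticized: a "Bayesian agent" just keeps a probability over hypotheses and updates it with Bayes' theorem as evidence arrives. A minimal sketch of that single tool, with made-up numbers (nothing here is from Bostrom or from any particular AI system):

    # Bayes' theorem as "one more tool in the toolbox": update beliefs
    # about competing hypotheses after observing one piece of evidence.
    # The priors and likelihoods below are invented for illustration.
    def bayes_update(priors, likelihoods):
        # priors: P(H) per hypothesis; likelihoods: P(evidence | H).
        joint = {h: priors[h] * likelihoods[h] for h in priors}
        evidence = sum(joint.values())          # P(evidence), the normalizer
        return {h: p / evidence for h, p in joint.items()}

    priors = {"spam": 0.5, "ham": 0.5}
    likelihoods = {"spam": 0.8, "ham": 0.1}     # P("free money" | class)
    print(bayes_update(priors, likelihoods))    # spam ~0.89, ham ~0.11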
Plarry 01.03.17 at 5:45 am
It's an interesting talk, but the weakest point in it is his conclusion, as you point out. What
I draw from his conclusion is that Ceglowski hasn't actually experienced much American or British
SF.
There are great literary works pointed out in the thread so far, but even Star Trek
and Red Dwarf hit on those themes occasionally in TV, and there are a number of significant
examples in film, including "blockbusters" such as Blade Runner or The Abyss.
I made this point in the recent evopsych thread when it started approaching some more fundamental
philosophy-of-mind issues like Turing completeness and modularity, but any conversation about
AI and philosophy could really, really benefit from more exposure to continental philosophy
if we want to say anything incisive about the presuppositions of AI and what the term "artificial
intelligence" could even mean in the first place. You don't even have to go digging through a
bunch of obscure French and German treatises to find the relevant arguments, either, because someone
well versed at explaining these issues to Anglophone non-continentals has already done it for
you: Hubert Dreyfus, who was teaching philosophy at MIT right around the time of AI's early triumphalist
phase that inspired much of this AI fanfic to begin with, and who became persona non grata in
certain crowds for all but declaring that the then-current approaches were a waste of time and
that they should all sit down with Heidegger and Merleau-Ponty. (In fact it seems obvious that
Ceglowski's allusion to alchemy is a nod to Dreyfus, one of whose first major splashes in the
'60s was a paper called "Alchemy and Artificial Intelligence".)
IMO
Dreyfus' more recent paper called "Why Heideggerian AI failed, and how fixing it would require
making it more Heideggerian" provides the best short intro to his perspective on the more-or-less
current state of AI research. What Ceglowski calls "pouring absolutely massive amounts of data
into relatively simple neural networks", Dreyfus would call an attempt to bring out the characteristic
of "being-in-the-world" by mimicking what for a human being we'd call "enculturation", which seems
to imply that Ceglowski's worry about connectionist AI research leading to more pressure toward
mass surveillance is misplaced. (Not that there aren't other worrisome social and political pressures
toward mass surveillance, of course!) The problem for modern AI isn't acquiring ever-greater mounds
of data, the problem is how to structure a neural network's cognitive development so it learns
to recognize significance and affordances for action within the patterns of data to which it's
already naturally exposed.
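To make "relatively simple neural networks" concrete, here is a deliberately tiny connectionist sketch (made-up data, one hidden layer, plain NumPy): repeated gradient-descent weight adjustment against examples is the bare mechanism that, at vastly larger scale, the "massive amounts of data" get poured into.

    # Toy network: one hidden layer trained by gradient descent to learn XOR.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)        # hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)        # output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)                         # forward pass
        out = sigmoid(h @ W2 + b2)
        g_out = (out - y) * out * (1 - out)              # backpropagation
        g_h = g_out @ W2.T * h * (1 - h)
        W2 -= 0.5 * h.T @ g_out;  b2 -= 0.5 * g_out.sum(axis=0)
        W1 -= 0.5 * X.T @ g_h;    b1 -= 0.5 * g_h.sum(axis=0)

    print(out.round(3).ravel())     # typically approaches [0, 1, 1, 0]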
And yes, popular fiction about AI largely still seems stuck on issues that haven't been cutting-edge
since the old midcentury days of cognitivist triumphalism, like Turing tests and innate thought
modules and so on - which seems to me like a perfectly obvious result of the extent to which the
mechanistically rationalist philosophy Dreyfus criticizes in old-fashioned AI research is still
embedded in most lay scifi readers' worldviews. Even if actual scientists are increasingly attentive
to continental-inspired critiques, this hardly seems true for most laypeople who worship the
idea of science and technology enough to structure their cultural fantasies around it.
At least this seems to be the case for Anglophone culture, anyway; I'd definitely be interested
if there's any significant body of AI-related science fiction originally written in other languages,
especially French, German, or Spanish, that takes more of these issues into account.
WLGR 01.03.17 at 7:37 pm
And in trying to summarize Dreyfus, I exemplified one of the most fundamental mistakes he and
Heidegger would both criticize! Neither of them would ever call something like the training of
a neural network "an attempt to bring out the characteristic of being-in-the-world", because being-in-the-world
isn't a characteristic in the sense of any Cartesian ontology of substances with properties,
it's a way of being that a living cognitive agent (Heidegger's "Dasein") simply embodies.
In other words, there's never any Michelangelo moment where a creator reaches down or flips a
switch to imbue their artificial creation ex nihilo with some kind of divine spark of life or
intellect, a "characteristic" that two otherwise identical lumps of clay or circuitry can either
possess or not possess - whatever entity we call "alive" or "intelligent" is an entity that by
its very physical structure can enact this way of being as a constant dialectic between itself
and the surrounding conditions of its growth and development. The second we start trying to isolate
a single perceived property called "intelligence" or "cognition" from all other perceived properties
of a cognitive agent, we might as well call it the soul and locate it in the pineal gland.
@RichardM
> If you can predict all your opponent's possible moves, and have a contingency for each, you don't
need to care which one they actually do pick. You don't need to know what it feels like to be
a ball to be able to catch it.
In the real world, there are too many physically possible moves, so it's too expensive to prepare
for each, and time constraints require you to make predictions. You do need to know how balls
(re)act in order to play ball. Humans being a bit more complex, trying to predict and/or influence
their actions without a theory of mind may work surprisingly well sometimes, but ultimately
has its limitations and will only get you so far, as animals have often found.
@Ben
> Blindsight has an extensive appendix with cites detailing where Watts got the ideas he's playing
with, including the ones you bring up, and provides specific warrants for including them. A critique
of Watts' use of the ideas needs to be a little bit more granular.
I did read his appendix, and no, some of the things I brought up were not, in fact, addressed
there at all, and for others I found his justifications unconvincing. However, having an epic
pro- vs. anti-Blindsight discussion here would feel too much like work: I wrote my opinion once
and I'll leave it at that.
stevenjohnson 01.03.17 at 8:57 pm
Matt@43
So far as designing an AI to want what people want, I am agnostic as to whether that goal is the
means to the goal of a general intelligence a la humanity. It still seems to me that brains have the
primary function of outputting regulations for the rest of the body, then altering those outputs
in response to the subsequent outcomes (which are identified by a multitude of inputs, starting
with oxygenated hemoglobin and blood glucose). I'm still not aware of what people say about the
subject of AI motivations, but if you say so, I'm not expert enough in the literature to argue.
Superintelligence on the part of systems expert in selected domains still seems to be of great
speculative interest. As to Bostrom and AI and Bayesian reasoning, I avoid Bayesianism because
I don't understand it. Bunge's observation that propositions aren't probabilities sort of messed
up my brain on that topic. Bayes' theorem I think I understand, even to the point I seem to recall
following a mathematical derivation.
WLGR@45, 46. I don't understand how continental philosophy will tell us what people want. It
still seems to me that a motive for thinking is essential, but my favored starting point for humans
is crassly biological. I suppose by your perspective I don't understand the question. As to the
lack of a Michelangelo moment for intelligence, I certainly don't recall any from my infancy.
But perhaps there are people who can recall the womb.
AI-related science fiction originally written in other languages
Tentatively, possibly Japanese anime. Serial Experiments Lain. Ghost in the Shell. Numerous
mecha-human melds. End of Evangelion.
The mashup of cybertech, animism, and Buddhism works toward merging rather than emergence.
Matt 01.04.17 at 1:21 am
Actually existing AI and leading-edge AI research are overwhelmingly
not about pursuing "general intelligence"* a la humanity. They are about performing
tasks that have historically required what we historically considered to be human intelligence,
like winning board games or translating news articles from Japanese to English.
Actual AI systems don't resemble brains much more than forklifts resemble Olympic weightlifters.
Talking about the risks and philosophical implications of the intellectual equivalent of forklifts
- another wave of computerization - either lacks drama or requires far too much background preparation
for most people to appreciate the drama. So we get this stuff about superintelligence and existential
risk, like a philosopher wanted to write about public health but found it complicated and dry,
so he decided to warn how utility monsters could destroy the National Health Service. It's exciting
at the price of being silly. (And at the risk of other non-experts not realizing it's silly.)
(I'm not an honest-to-goodness AI expert, but I do at least write software for a living, I
took an intro to AI course during graduate school in the early 2000s, I keep up with research
news, and I have written parts of a production-quality machine learning system.)
*In fact I consider "general intelligence" to be an ill-formed goal, like "general beauty."
Beautiful architecture or beautiful show dogs? And beautiful according to which traditions?
Watson was actually a specialized system designed to win the Jeopardy contest. Highly specialized.
Too much hype around AI, although hardware advances make more things possible and speech
recognition is now pretty decent.
Notable quotes:
"... I used to be supportive of things like welfare reform, but this is throwing up new challenges that will probably require new paradigms. Since more and more low skilled jobs - including those of CEOs - get automated, there will be fewer jobs for the population ..."
"... The problem I see with this is that white collar jobs have been replaced by technology for centuries, and at the same time, technology has enabled even more white collar jobs to exist than those that it replaced. ..."
"... For example, the word "computer" used to be universally referred to as a job title, whereas today it's universally referred to as a machine. ..."
"... It depends on the country, I think. I believe many countries, like Japan and Finland, will indeed go this route. However, here in the US, we are vehemently opposed to anything that can be branded as "socialism". So instead, society here will soon resemble "The Walking Dead". ..."
"... "Men and nations behave wisely when they have exhausted all other resources." -- Abba Eban ..."
"... Which is frequently misquoted as, "Americans can always be counted on to do the right thing after they have exhausted all other possibilities." ..."
"... So when the starving mob are at the ruling elites' gates with torches and pitch forks, they'll surely find the resources to do the right thing. ..."
"... When you reduce the human labor participation rate relative to the overall population, what you get is deflation. That's an undeniable fact. ..."
"... But factor in governments around the world "borrowing" money via printing to pay welfare for all those unemployed. So now we have deflation coupled with inflation = stagflation. But stagflation doesn't last. At some point, the entire system - as we know it- will implode. What can not go on f ..."
"... Unions exist to protect jobs and employment. The Pacific Longshoremen's Union during the 1960's&70's was an aberration in the the union bosses didn't primarily look after maintaining their own power via maintaining a large number of jobs, but rather opted into profit sharing, protecting the current workers at the expense of future power. Usually a union can be depended upon to fight automation, rather than to seek maximization of public good ..."
"... Until something goes wrong. Who is going to pick that machine generated code apart? ..."
"... What automation? 1000 workers in US vs 2000 in Mexico for half the cost of those 1000 is not "automation." Same thing with your hand-assembled smartphone. ..."
"... Doctors spend more time with paper than with patients. Once the paper gets to the insurance company chances are good it doesn't go to the right person or just gets lost sending the patient back to the beginning of the maze. The more people removed from the chain the bet ..."
"... I'm curious what you think you can do that Watson can't. ..."
"... Seriously? Quite a bit actually. I can handle input streams that Watson can't. I can make tools Watson couldn't begin to imagine. I can interact with physical objects without vast amounts of programming. I can deal with humans in a meaningful and human way FAR better than any computer program. I can pass a Turing test. The number of things I can do that Watson cannot is literally too numerous to bother counting. Watson is really just an decision support system with a natural language interface. Ver ..."
"... It's not Parkinson's law, it's runaway inequality. The workforce continues to be more and more productive as it receives an unchanging or decreasing amount of compensation (in absolute terms - or an ever-decreasing share of the profits in relative terms), while the gains go to the 1%. ..."
Posted by msmash on Monday January 02, 2017 @12:00PM from the they-are-here dept.
Most
of the attention around automation focuses on how factory robots and self-driving cars may fundamentally
change our workforce, potentially eliminating millions of jobs.
But AI that can handle knowledge-based,
white-collar work is also becoming increasingly competent.
The AI will scan hospital records and other documents to determine insurance payouts, according to
a company press release, factoring injuries, patient medical histories, and procedures administered.
Automation of these research and data gathering tasks will help the remaining human workers process
the final payout faster, the release says.
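As a rough, entirely hypothetical illustration of the "research and data gathering" step described in the release (the field names, procedure codes, and rates below are invented, not Fukoku's or IBM's): pull the billable items out of a structured record and propose a payout figure for a human worker to confirm.

    # Hypothetical sketch of "scan the record, propose a payout for review".
    PROCEDURE_RATES = {"X-RAY": 120.0, "MRI": 900.0, "SURGERY-MINOR": 2500.0}
    INJURY_MULTIPLIER = {"sprain": 1.0, "fracture": 1.4, "head trauma": 2.0}

    def propose_payout(record):
        base = sum(PROCEDURE_RATES.get(code, 0.0) for code in record["procedures"])
        multiplier = INJURY_MULTIPLIER.get(record["injury"], 1.0)
        # An invented rule: pre-existing conditions reduce the covered share.
        coverage = 0.8 if record.get("preexisting") else 1.0
        return round(base * multiplier * coverage, 2)

    claim = {"injury": "fracture",
             "procedures": ["X-RAY", "SURGERY-MINOR"],
             "preexisting": False}
    print(propose_payout(claim))   # 3668.0, queued for a human adjuster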
As a software developer of enterprise software, every company I have worked for has either
produced software which reduced white collar jobs or allowed companies to grow without hiring
more people. My current company has seen over 10x profit growth over the past five years with
a 20% increase in manpower. And we exist in a primarily zero sum portion of our industry, so this
is directly taking revenue and jobs from other companies. [He is
lying -- NNB]
People need to stop living in a fairy tale land where near full employment is a reality in
the near future. I'll be surprised if labor participation rate of 25-54 year olds is even 50%
in 10 years.
I used to be supportive of things like welfare reform, but this is throwing up new challenges
that will probably require new paradigms. Since more and more low skilled jobs - including those
of CEOs - get automated, there will be fewer jobs for the population
This then throws up the question of whether we should have a universal basic income. But one
potential positive trend of this would be an increase in time spent home w/ family, thereby reducing
the time kids spend in daycare and w/ both parents - n
But one potential positive trend of this would be an increase in time spent home w/ family,
thereby reducing the time kids spend in daycare
Great, so now more people can home school and indoctrinate - err teach - family values.
Anonymous Coward writes:
The GP is likely referring to the conservative Christian homeschooling movement who homeschool
their children explicitly to avoid exposing their children to a common culture. The "mixing pot"
of American culture may be mostly a myth, but some amount of interaction helps understanding and
increases the chance people will be able to think of themselves as part of a singular nation.
I believe in freedom of speech and association, so I do not favor legal remedies, but it is
a cultural problem that may have socia
No, I was not talking about homeschooling at all. I was talking about the fact that when kids
are out of school, they go to daycares, since both dad and mom are busy at work. Once most of
the jobs are automated so that it's difficult for anyone but geniuses to get jobs, parents might
spend that freed up time w/ their kids. It said nothing about homeschooling: not all parents would
have the skills to do that.
I'm all for a broad interaction b/w kids, but that's something that can happen at schools,
and d
Uh, why would Leftist parents indoctrinate w/ family values? They can teach their dear offspring
how to always be malcontents in the unattainable jihad for income equality. Or are you saying
that Leftists will all abort their foetuses in an attempt to prevent climate change?
Have you ever had an original thought? Seriously, please be kidding, because you sound like
you are one step away from serial killing people you consider "leftist", and cremating them in
the back yard while laughing about releasing their Carbon Dioxide into the atmosphere.
My original comment was not about home schooling. It was about parents spending all time w/
their kids once kids are out of school - no daycares. That would include being involved w/ helping
their kids w/ both homework and extra curricular activities.
The problem I see with this is that white collar jobs have been replaced by technology for
centuries, and at the same time, technology has enabled even more white collar jobs to exist than
those that it replaced.
For example, the word "computer" used to refer universally to a job title, whereas
today it refers universally to a machine.
The problem is that AI is becoming faster at learning the new job opportunities than people
are, thereby gobbling them up before people even get there. And this speed is growing.
You cannot beat exponential growth with linear growth, or even with a slightly slower-growing
exponential one.
I completely agree. Even jobs which a decade ago looked irreplaceable, like teachers, doctors
and nurses, are possibly in the crosshairs. There are very few jobs in which AI can't partially (or
in some cases completely) replace humans. Society has some big choices to make in the upcoming
decades and political systems may crash and rise as we adapt.
Are we heading towards "basic wage" for all people? The ultimate socialist state?
Or is the gap between haves and have nots going to grow exponentially, even above today's growth
as those that own the companies and AI bots make ever increasing money and the poor suckers at
the bottom, given just enough money to consume the products that keep the owners in business.
Society has some big choices to make in the upcoming decades and political systems may crash
and rise as we adapt.
Are we heading towards "basic wage" for all people? The ultimate socialist state?
It depends on the country, I think. I believe many countries, like Japan and Finland, will
indeed go this route. However, here in the US, we are vehemently opposed to anything that can be branded as "socialism".
So instead, society here will soon resemble "The Walking Dead".
I think even in the US it will hit a tipping point when it gets bad enough. When our consumer
society can't buy anything because they are all out of work, we will need to change our way of
thinking about this, or watch the economy completely collapse.
So when the starving mob are at the ruling elites' gates with torches and pitch forks, they'll
surely find the resources to do the right thing.
Yes, they'll use some of their wealth to hire and equip private armies to keep the starving
mob at bay because people would be very happy to take any escape from being in the starving mob.
Might be worth telling your kids that taking a job in the armed forces might be the best way
to ensure well paid future jobs because military training would be in greater demand.
What you're ignoring is that the military is becoming steadily more mechanized also. There
won't be many jobs there, either. Robots are more reliable and less likely to side with the protesters.
I'm going with the latter (complete economic collapse). There's no way, with the political
attitudes and beliefs present in our society, and our current political leaders, that we'd be
able to pivot fast enough to avoid it. Only small, homogenous nations like Finland (or Japan,
even though it's not that small, but it is homogenous) can pull that off because they don't have
all the infighting and diversity of political beliefs that we do, plus our religious notion of
"self reliance".
There are a few ways this plays out. How do we deal with this? One way is a basic income.
The other, less articulated way, which is the basis for a lot of people's views, is that things simply
get cheaper. Deflation is good. You simply live on less. You work less. You earn less. But you
can afford the food, water... of life.
Now this is a hard transition in many places. There are loads of things that don't go well
with living on less and deflation. Debt, government services, pensions...
The main problem with this idea of "living on less" is that, even in the southern US, the rent
prices are very high these days because of the real estate bubble and property speculation and
foreign investment. The only place where property isn't expensive is in places where there are
really zero jobs at all.
All jobs that don't do R&D will be replaceable in the near future, as in within 1 or 2 generations.
Even R&D jobs will likely not be immune, since much R&D is really nothing more than testing a
basic hypothesis, of which most of the testing can likely be handed over to AI. The question is
what do you do with 24B people with nothing but spare time on their hands, and a smidgen of 1%
that actually will have all the wealth? It doesn't sound pretty, unless some serious changes in
the way we deal wit
Worse! Far worse!! Total collapse of the fiat currencies globally is imminent. When you reduce
the human labor participation rate relative to the overall population, what you get is deflation.
That's an undeniable fact.
But factor in governments around the world "borrowing" money via printing
to pay welfare for all those unemployed. So now we have deflation coupled with inflation = stagflation.
But stagflation doesn't last. At some point, the entire system - as we know it- will implode.
What can not go on f
I don't know what the right answer is, but it's not unions. Unions exist to protect jobs and
employment. The Pacific Longshoremen's Union during the 1960's&70's was an aberration in that the
union bosses didn't primarily look after maintaining their own power via maintaining a large number
of jobs, but rather opted into profit sharing, protecting the current workers at the expense of
future power. Usually a union can be depended upon to fight automation, rather than to seek maximization
of public good.
As a software developer of enterprise software, every company I have worked for has either
produced software which reduced white collar jobs or allowed companies to grow without hiring
more people.
You're looking at the wrong scale. You need to look at the whole economy. Were those people
able to get hired elsewhere? The answer in general was almost certainly yes. Might have taken
some of them a few months, but eventually they found something else.
My company just bought a machine
that allows us to manufacture wire leads much faster than we can do it by hand. That doesn't mean
that the workers we didn't employ to do that work couldn't find gainful employment elsewhere.
And we exist in a primarily zero sum portion of our industry, so this is directly taking
revenue and jobs from other companies.
Again, so what? You've automated some efficiency into an industry that obviously needed it.
Some workers will have to do something else. Same story we've been hearing for centuries. It's
the buggy whip story just being retold with a new product. Not anything to get worried about.
People need to stop living in a fairy tale land where near full employment is a reality
in the near future.
Based on what? The fact that you can't imagine what people are going to do if they can't do
what they currently are doing? I'm old enough to predate the internet. The World Wide Web was
just becoming a thing while I was in college. Apple, Microsoft, Google, Amazon, Cisco, Oracle,
etc all didn't even exist when I was born. Vast swaths of our economy hadn't even been conceived
of back then. 40 years from now you will see a totally new set of companies doing amazing things
you never even imagined. Your argument is really just a failure of your own imagination. People
have been making that same argument since the dawn of the industrial revolution and it is just
as nonsensical now as it was then.
I'll be surprised if labor participation rate of 25-54 year olds is even 50% in 10 years.
Prepare to be surprised then. Your argument has no rational basis. You are extrapolating some
micro-trends in your company well beyond any rational justification.
Were those people able to get hired elsewhere? The answer in general was almost certainly yes.
Oh, oh, I know this one! "New jobs being created in the past don't guarantee that new jobs
will be created in the future". This is the standard groupthink answer for waiving any responsibility
after advice given about the future, right?
People have been making that same argument since the dawn of the industrial revolution and
it is just as nonsensical now as it was then.
I see this argument often when these type of discussions come up. It seems to me to be some
kind of logical fallacy to think that something new will not happen because it has not happened
in the past. It reminds me of the historical observation that generals are always fighting the
last war.
It seems to me to be some kind of logical fallacy to think that something new will not happen
because it has not happened in the past.
What about humans and their ability to problem solve and create and build has changed? The
reason I don't see any reason to worry about "robots" taking all our jobs is because NOTHING has
changed about the ability of humans to adapt to new circumstances. Nobody has been able to make
a coherent argument detailing why humans will not be able to continue to create new industries
and new technologies and new products in the future. I don't pretend to know what those new economies
will look like with any gre
You didn't finish your thought. Just because generals are still thinking about the last
war doesn't mean they don't adapt to the new one when it starts.
Actually yes it does. The history of the blitzkrieg is not one of France quickly adapting to
new technologies and strategies to repel the German invaders. It is of France's Maginot line being
mostly useless in the war and Germany capturing Paris with ease. Something neither side could
accomplish in over four years in the previous war was accomplished in around two months using
the new paradigm.
Will human participation in the workforce adapt to AI technologies in the next 50 years? Almost
certainly. Is it li
It's simple. Do you know how, once we applied human brain power over the problem of flying
we managed, in a matter of decades, to become better at flying than nature ever did in hundreds
of millions of years of natural selection? Well, what do you think will happen now that we're
focused on making AI better than brains? As in, better than any brains, including ours?
AI is catching up to human abilities. There's still a way to go, but breakthroughs are happening
all the time. And as with flying, it won't take
One can hope that your analogy with flying is correct. There are still many things that birds
do better than planes. Even so I consider that a conservative projection when given without a
time-line.
What about humans and their ability to problem solve and create and build has changed? The
reason I don't see any reason to worry about "robots" taking all our jobs is because NOTHING
has changed about the ability of humans to adapt to new circumstances.
I had this discussion with a fellow a long time ago who was so conservative he didn't want
any regulations on pollutants. The Love Canal disaster was the topic. He said "no need to do anything,
because humans will adapt - it's called evolution."
I answered - "Yes, we might adapt. But you realize that means 999 out of a 1000 of us will
die, and it's called evolution. Sometimes even 1000 out of 1000 die, that's called extinction."
This will be a different adaptation, but very well might be solved by most of
Generally speaking, though, when you see a very consistent trend or pattern over a long time,
your best bet is that the trend will continue, not that it will mysteriously veer off because
now it's happening to white collar jobs instead of blue collar jobs. I'd say the logical fallacy
is to disbelieve that the trend is likely to continue. Technology doesn't invalidate basic
economic theory, in which people manage to find jobs and services to match the level of the population
precisely because there are so
It's the buggy whip story just being retold with a new product. Not anything to get worried
about.
The buggy whip story shows that an entire species which had significant economic value for
thousands of years found that technology had finally reached a point where they weren't needed.
Instead of needing 20 million of them working in our economy in 1920, by 1960 there were only
about 4.5 million. While they were able to take advantage of the previous technological revolutions
and become even more useful because of better technology in the past, most horses could not survive
the invention of the automobile
Your question is incomplete. The correct question to ask is if these people were able to get
hired elsewhere *at the same salary when adjusted for inflation*. To that, the answer is no.
It
hasn't been true on average since the 70's. Sure, some people will find equal or better jobs,
but salaries have been steadily decreasing since the onset of technology. Given a job for less
money or no job, most people will pick the job for less; and that is why we are not seeing a large
change in the unemployment rate.
There is another effect. When the buggy whip manufacturers were put out of business, there
were options for people to switch to and new industries were created. However, if AI gets applied
across an entire economy, there won't be options because there is unemployment in every sector.
And if AI obviates the need for workers, investors in new industries will build them around bots,
so no real increase in employment. That and yer basic truck driver ain't going to be learning
how to program.
Agreed, companies will be designed around using as little human intervention as possible. First
they will use AI, then they will use cheap foreign labor, and only if those two options are completely
impractical will they use domestic labor. Any business plan that depends on more than a small
fraction of domestic labor (think Amazon's 1 minute of human handling per package) is likely to
be considered unable to compete. I hate the buggy whip analogy, because using foreign (cheap)
labor as freely as today w
Maybe the automation is a paradigm shift on par with the introduction of agriculture replacing
the hunter and gatherer way of living? Then, some hunter-gatherers were perhaps also making
"luddite" arguments: "Nah, there will always be sufficient forests/wildlife for everyone to
live on. No need to be afraid of these agriculturists. We have been hunting and gathering
for millennia. That'll never change."
Were those people able to get hired elsewhere? The answer in general was almost certainly
yes.
Actually, the answer is probably no.
Labor force participation
[tradingeconomics.com] rates have fallen steadily since about the
year 2000. Feminism caused the rate to rise from 58% (1963) to 67% (2000). Since then, it has
fallen to 63%. In other words, we've already lost almost half of what we gained from women entering
the workforce en masse. And the rate will only continue to fall in the future.
You must admit that *some* things are different. Conglomeratization may make it difficult to
create new jobs, as smaller businesses have trouble competing with the mammoths. Globalization
may send more jobs offshore until our standard of living has leveled off with the rest of the
world. It's not inconceivable that we'll end up with a much larger number of unemployed people,
with AI being a significant contributing factor. It's not a certainty, but neither is your scenario
of the status quo. Just because it
People need to stop living in a fairy tale land where near full employment is a reality
in the near future. I'll be surprised if labor participation rate of 25-54 year olds is even
50% in 10 years.
Then again, tell me how companies are going to make money to service the stakeholders when
there are no people around who can buy their highly profitable wares?
Now speaking of fairy tales, that one is much more magical than your full employment one.
This ain't rocket science. Economies are at base, an equation. You have producers on one side,
and consumers on the other. Ideally, they balance out, with extra rewards for the producers. Now
either side can cheat, such as if producers can move productio
Until Fortran was developed, humans used to write code telling the computer what to do. Since
the late 1950s, we've been writing a high-level description, then a computer program writes the
program that actually gets executed.
Nowadays, there's frequently a computer program, such as a browser, which accepts our high-level
description of the task and interprets it before generating more specific instructions for another
piece of software, an api library, which creates more specific instructions for another api
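One quick way to see "a computer program writes the program that actually gets executed" for yourself (illustrative only; the comment above started from Fortran, but Python's standard dis module makes the same layering visible): the high-level description below is turned into lower-level bytecode instructions that no human wrote by hand.

    # The high-level description we write ...
    import dis

    def area(width, height):
        return width * height

    # ... and the lower-level instruction stream generated from it.
    dis.dis(area)
    # Typical output (exact opcodes vary by Python version):
    #   LOAD_FAST   width
    #   LOAD_FAST   height
    #   BINARY_MULTIPLY      (or BINARY_OP * on newer versions)
    #   RETURN_VALUE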
I see plenty of work in reducing student-teacher ratios in education, increasing maintenance
and inspection intervals, transparency reporting on public officials, etc. Now, just convince
the remaining working people that they want to pay for this from their taxes.
I suppose when we
hit 53% unemployed, we might be able to start winning popular elections, if the unemployed are
still allowed to vote then.
At least here in the US, that won't change anything. The unemployed will still happily vote
against anything that smacks of "socialism". It's a religion to us here. People here would rather
shoot themselves (and their family members) in the head than enroll in social services.
Remember, most of the US population is religious, and not only does this involve some "actual"
religion (usually Christianity), it also involves the "anti-socialism" religion. Now remember,
the defining feature of religion is a complete lack of rationality, and believing in something with
zero supporting evidence, frequently despite enormous evidence to the contrary (as in the case
of young-earth creationism, something that a huge number of Americans believe in).
Since this is very, very similar to what my partner does, I feel like I'm a little qualified
to speak on the subject at hand.
Yeah, pattern matching should nail this - but pattern matching only works if the patterns are
reasonable/logical/consistent. Yes, I'm a little familiar with advanced pattern matching, filtering,
etc.
Here's the thing: doctors are crappy input sources. At least in the US medical system. And
in our system they are the ones that have to make the diagnosis (in most cases). They are
inconsistent.
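A tiny hypothetical example of why inconsistent input defeats naive pattern matching: the same diagnosis written three different ways, with an exact-match rule catching only one of them, and even a hand-tuned regex needing every variant spelled out in advance.

    # Invented free-text entries: one diagnosis, three spellings.
    import re

    entries = [
        "Dx: Type 2 diabetes mellitus",
        "diagnosis - T2DM",
        "pt has DM, type II",
    ]

    # Naive exact substring match: catches only one of the three.
    naive = [e for e in entries if "Type 2 diabetes" in e]

    # More forgiving pattern: works, but every variant had to be anticipated.
    pattern = re.compile(r"type\s*(2|ii)\s*diabetes|t2dm|dm,\s*type\s*(2|ii)", re.I)
    forgiving = [e for e in entries if pattern.search(e)]

    print(len(naive), len(forgiving))   # 1 3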
What automation? 1000 workers in US vs 2000 in Mexico for half the cost of those
1000 is not "automation." Same thing with your hand-assembled smartphone. I'd rather
have it be assembled by robots in the US with 100 human babysitters than hand-built in
China with by 1000 human drones.
I hope their data collection is better than it is in the US. Insurance companies' systems can't
talk to the doctors' systems. They are stuck with 1980s technology or sneaker net to get information
exchanged. Paper gets lost, forms don't match.
Doctors spend more time with paper than with patients.
Once the paper gets to the insurance company chances are good it doesn't go to the right person
or just gets lost sending the patient back to the beginning of the maze. The more people removed
from the chain the bet
You think this is anything but perfectly planned? Insurance companies prevaricate better than
anyone short of a Federal politician. 'Losing' a claim costs virtually nothing. Mishandling a
claim costs very little. Another form letter asking for more / the same information, ditto.
Computerizing the whole shebang gives yet another layer of potential delay ('the computer is
slow today' is a perennial favorite).
That said, in what strange world is insurance adjudication considered 'white collar'? In the
US a
Japan needs to automate as much as it can and robotize to survive with a workforce growing
old. Japan is facing this reality as well as many countries where labor isn't replaced at a sufficient
rate to keep up with the needs. Older people will need care some countries just cannot deliver
or afford.
Calm down everyone. This is just a continuation of productivity tools for accounting. Among
other things I'm a certified accountant. This is just the next step in automation of accounting
and it's a good thing. We used to do all our ledgers by hand. Now we all use software for that
and believe me you don't want to go back to the way it was.
Very little in accounting is actually
value added activity so it is desirable to automate as much of it as possible. If some people
lost their jobs doing that it's equivalent to how the PC replaced secretaries 30+ years ago. They
were doing a necessary task but one that added little or no value. Most of what accountants do
is just keeping track of what happened in a business and keeping the paperwork flowing where it
needs to go. This is EXACTLY what we should be automating whenever possible.
I'm sure there are going to be a lot folks loudly proclaiming how we are all doomed and that
there won't be any work for anyone left to do. Happens every time there is an advancement in automation
and yet every time they are wrong. Yes some people are going to struggle in the short run. That
happens with every technological advancement. Eventually they find other useful and valuable things
to do and the world moves on. It will be fine.
I'm curious what you think you can do that Watson can't. Accounting is a very rigidly structured
practice. All IBM really needs to do is let Watson sift through the books of a couple hundred
companies and it will easily determine how to best achieve a defined set of objectives for a corporation.
I'm curious what you think you can do that Watson can't.
Seriously? Quite a bit actually. I can handle input streams that Watson can't. I can make tools
Watson couldn't begin to imagine. I can interact with physical objects without vast amounts of
programming. I can deal with humans in a meaningful and human way FAR better than any computer
program. I can pass a Turing test. The number of things I can do that Watson cannot is literally
too numerous to bother counting. Watson is really just a decision support system with a natural
language interface. Ver
Yep! I don't even work in Accounting or Finance, but because I do computer support for that
department and have to get slightly involved in the bill coding side of the process -- I agree
completely.
I'm pretty sure that even if you *could* get a computer to do everything for Accounting automatically,
people would constantly become frustrated with parts of the resulting process -- from reports
requested by management not having the formatting or items desired on them, to inflexibility getting
an item charged
You think the $12/hr staff at a doctor's office code and invoice bills correctly? The blame
goes both ways. Really our ridiculous and convoluted medical system is to blame. Imagine if doctors
billed on a time basis like a lawyer.
When you have people basically implementing a process without much understanding, it is pretty
easy to automate their jobs away. The only thing Watson is contributing is the translation from
natural language to a more formalized one. No actual intelligence needed.
Computers/automation/robotics have been replacing workers of all stripes including white collar
workers since the ATM was introduced in 1967. Every place I have ever worked has had internal
and external software that replaces white collar workers (where you used to need 10 people now
you need 2).
The reality is that the economy is limited by a scarcity of labor when government doesn't interfere
(the economy is essentially the sum of every worker's work multiplied by their efficiency as valued
by the economy i
Turns out it's rather simple, really --- just ban computers. He's going to start by replacing
computers with human couriers for the secure-messaging market, and move outward from there. By
2020 we should have most of the Internet replaced by the (now greatly expanded) Post Office.
At least, as long as banks keep writing the software they do.
My bank's records of my purchases isn't updating today. This is one of the biggest banks in
Canada. Transactions don't update properly over the weekends or holidays. Why? Who knows? Why
has bank software EVER cared about weekends? What do business days matter to computers? And yet
here we are. There's no monkey to turn the crank on a holiday, so I can't confirm my account activity.
Dude. Stop it. I've read 18th C laissez-faire writers (de Gournay), Bastiat, the Austrian School
(Carl Menger, Bohm-Bawerk, von Mises, Hayek), Rothbard, Milton Friedman. The Free Market is opposed
to corporatism. You might hate Ayn Rand but she skewered corporatists as much as she did socialists.
You should read some of these people. You'll see that they are opposed to corporatism. Don't get
your information from opponents who create straw men and then, so skillfully, defeat their opponent's
arguments.
Corporatism is the use of government pull to advance your business. The use of law and the
police power of the state to aid your business against another's. This used to be called "mercantilism."
Free market capitalism is opposed to this; the removal of the power of pull.
Read Bastiat, Carl Menger, von Mises, Hayek, Milton Friedman. You'll see them all referring
to the government as an agent which helps one set of businesses over another. Government may give
loans, bailouts, etc... Free market people are against this. Corporatism /= Free Market. Don't only get your information from those who hate individualism and free
markets - read (or in Milton Friedman's case listen) to their arguments. You may disagree with
them but you'll see well regarded individuals who say that
When a business gets government to give it special favors (Solyndra) or to give it tax breaks
or a monopoly, this is corporatism. It used to be called mercantilism. In either case free-market
capitalists stand in opposition to it. This is exactly what "laissez-faire" capitalism means:
leave us alone, don't play favorites, stay away.
How do these people participate in a free market without setting up corporations? Have
you ever bought anything from a farmers' market? Have you ever hired a plumber d/b/a himself rather
than working for Plumbers-R-Us? Have you ever bought a used car directly from a private seller?
Do you have a 401k/403b/457/TSP/IRA? Have you ever used eBay? Have you ever traded your labor
for a paycheck (aka "worked") without hiding behind an intermediate shell-corp? The freeness of
a market has nothing to do wit
Okay, so you're just still pissing and moaning over Trump's win and have no actual point. That's
fine, but you should take care not to make it sound too much like you actually have something
meaningful to say.
I'll say something meaningful when you can point out which one of Trump's cabinet made their
wealth on a farmer's market and without being affiliated with a corporation.
No. They don't. But, for the moment, it looks as if Andy Puzder (Sec of Labor) and Mick Mulvaney
(OMB) are fairly good free market people. We'll see. Chief of Staff Reince Priebus has made some
free-market comments. (Again, we'll see.) Sec of Ed looks like she wants to break up an entrenched
bureaucracy - might even work to remove Federal involvement. (Wishful thinking on my part) HUD
- I'm hopeful that Ben Carson was hired to break up this ridiculous bureaucracy. If not, at least
pare it down. Now, if
"Watson" is a marketing term from IBM, covering a lot of standard automation. It isn't the
machine that won at Jeopardy (although that is included in the marketing term, if someone wants
to pay for it). IBM tells managers, "We will have our amazing Watson technology solve this problem
for you." The managers feel happy. Then IBM has some outsourced programmers code up a workflow
app, with recurring annual subscription payments.
It doesn't matter. AI works best when there's a human in the loop, piloting the controls anyway.
What matters to a company is that 1 person + bots can now do the job that previously required
hundreds of white collar workers, for much less salary. What happens to the other workers should
not be a concern of the company managers, according to the modern religious creed - apparently
some magical market hand takes care to solve that problem automatically.
Pretty much. US companies already use claims processing systems that use previous data to evaluate
a current claim and spit out a number. Younger computer literate adjusters just feed the machine
and push a button.
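A hedged sketch of "use previous data to evaluate a current claim and spit out a number" (invented features and amounts; real systems are far messier, and the output would still go to an adjuster): fit a least-squares model to historical claims and score a new one.

    # Hypothetical payout model fitted to made-up historical claims.
    import numpy as np

    # Features per past claim: [num_procedures, days_in_hospital, severity]
    X_past = np.array([[1, 0, 1], [2, 1, 2], [3, 4, 3], [1, 2, 2], [4, 6, 5]], float)
    y_past = np.array([800, 2100, 5200, 1900, 9100], float)

    # Fit payout ~ X @ w + b by least squares (bias column appended).
    A = np.hstack([X_past, np.ones((len(X_past), 1))])
    w, *_ = np.linalg.lstsq(A, y_past, rcond=None)

    new_claim = np.array([2.0, 3.0, 2.0, 1.0])   # same features plus bias term
    print(round(float(new_claim @ w), 2))        # the "number" an adjuster reviews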
Universities downsize? Not with unlimited loans! (USA only.) If you need retraining you can get a loan,
and you may need to go for 2-4 years (and some credits may be too old, so you have to retake classes).
It's not Parkinson's law, it's runaway inequality. The workforce continues to be more and more
productive as it receives an unchanging or decreasing amount of compensation (in absolute terms
- or an ever-decreasing share of the profits in relative terms), while the gains go to the 1%.