
GE2020: The Roar of the Swing Voter

Hi everyone, this is my first ever post here.
I run a little website called The Thought Experiment where I talk about various issues, some of them Singapore-related. And one of my main interests is Singaporean politics. With the GE2020 election results, I thought I should pen down my take on what we as the electorate were trying to say.
If you like what I wrote, I also wrote another article on the state of play for GE2020 during the campaigning period, as well as 2 other articles related to GE2015 back when it was taking place.
If you don't like what I wrote, that's ok! I think the beauty of freedom of expression is that everyone is entitled to their opinion. I'm always happy to get feedback, because I do think that more public discourse about our local politics helps us to be more politically aware as a whole.
Just thought I'd share my article here to see what you guys make of it :D
Article Starts Here:
During the campaigning period, both sides sought to portray an extreme scenario of what would happen if voters did not vote for them. The People's Action Party (PAP) warned Singaporeans that their political opponents “might eventually replace the government after July 10”. Meanwhile, the Workers' Party (WP) stated that “there was a real risk of a wipeout of elected opposition MPs at the July 10 polls”.
Today is July 11th. As we all know, neither of these scenarios came to pass. The PAP comfortably retained its super-majority in Parliament, winning 83 out of 93 elected MP seats. But just as in GE2011, another Group Representation Constituency (GRC) has fallen to the WP. In addition, the PAP saw its vote share drop drastically, down almost 9% to 61.2% from 69.9% in GE2015.
Singapore's electorate is unique in that a significant proportion consists of swing voters: voters who don't hold any blind allegiance to any political party, but vote based on a variety of factors both micro and macro. The above extreme scenarios were clearly targeted at these swing voters. Well, the swing voters have made their choice, their roar sending 4 more elected opposition MPs into Parliament. This article aims to unpack that roar and what it means for the state of Singaporean politics going forward.
1. The PAP is still the preferred party to form Singapore’s Government
Yes, this may come across as blindingly obvious, but it still needs to be said. The swing voter is, by definition, liable to changes of opinion. And a large factor that determines how a swing voter votes is their perception of how their fellow swing voters are voting. If swing voters perceive that most swing voters are leaning towards voting for the opposition, they might feel compelled to vote for the incumbent. And if the reverse is true, swing voters might feel the need to shore up opposition support.
Why is this so? This is because the swing voter is trying to push the vote result into a sweet spot – one that lies between the two extreme scenarios espoused by either side. They don’t want the PAP to sweep all 93 seats in a ‘white tsunami’. Neither do they want the opposition to claim so much territory that the PAP is too weak to form the Government on its own. But because each swing voter only has a binary choice: either they vote for one side or the other (I’m ignoring the third option where they simply spoil their vote), they can’t very well say “I want to vote 0.6 for the PAP and 0.4 for the Opposition with my vote”. And so we can expect the swing voter bloc to continue being a source of uncertainty for both sides in future elections, as long as swing voters are still convinced that the PAP should be the Government.
2. Voters no longer believe that the PAP needs a ‘strong mandate’ to govern. They also don’t buy into the NCMP scheme.
Throughout the campaign period, the PAP repeatedly exhorted voters to vote for them alone. Granted, they couldn’t very well give any ground to the opposition without a fight. And therefore there was an attempt to equate voting for the PAP as voting for Singapore’s best interests. However, the main message that voters got was this: PAP will only be able to steer Singapore out of the Covid-19 pandemic if it has a strong mandate from the people.
What is a strong mandate, you may ask? While no PAP candidate publicly confirmed it, their incessant harping on the Non-Constituency Member of Parliament (NCMP) scheme as the PAP’s win-win solution for having the PAP in power and a largely de-fanged opposition presence in parliament shows that the PAP truly wanted a parliament where it held every single seat.
Clearly, the electorate has different ideas, handing Sengkang GRC to the WP and slashing the PAP’s margins in previous strongholds such as West Coast, Choa Chu Kang and Tanjong Pagar by double digit percentages. There is no doubt from the results that swing voters are convinced that a PAP supermajority is not good for Singapore. They are no longer convinced that to vote for the opposition is a vote against Singapore. They have realized, as members of a maturing democracy surely must, that one can vote for the opposition, yet still be pro-Singapore.
3. Social Media and the Internet are rewriting the electorate’s perception.
In the past, there was no way to have an easily accessible record of historical events. With the only information source available being biased mainstream media, Singaporeans could only rely on that to fill in the gaps in their memories. Therefore, Operation Coldstore became a myth of the past, and Chee Soon Juan became a crackpot in the eyes of the people, someone who should never be allowed into Parliament.
Fast forward to today. Chee won 45.2% of the votes in Bukit Batok’s Single Member Constituency (SMC). His party-mate, Dr. Paul Tambyah did even better, winning 46.26% of the votes in Bukit Panjang SMC. For someone previously seen as unfit for public office, this is an extremely good result.
Chee has been running for elections in Singapore for a long time, and only now is there a significant change in the way he is perceived (and supported) by the electorate. Why? Because of social media and the internet, two things which the PAP does not have absolute control over. With the ability to conduct interviews with social media personalities as well as upload party videos on Youtube, he has been able to display a side of himself to people that the PAP did not want them to see: someone who is merely human just like them, but who is standing up for what he believes in.
5. Reserved Election Shenanigans and Tan Cheng Bock: The electorate has not forgotten.
Tan Cheng Bock almost became our President in 2011. There are many who say that if Tan Kin Lian and Tan Jee Say had not run, Tony Tan would not have been elected. In March 2016, Tan Cheng Bock publicly declared his interest to run for the next Presidential Election that would be held in 2017. The close result of 2011 and Tan Cheng Bock’s imminent candidacy made the upcoming Presidential Election one that was eagerly anticipated.
That is, until the PAP shut down his bid for the presidency just a few months later in September 2016, using its supermajority in Parliament to pass a “reserved election” in which only members of a particular race could take part. Under the new rules that they had drawn up for themselves, it was decreed that only Malays could take part. And not just any Malay. The candidate had to either be a senior executive managing a firm that had S$500 million in shareholders’ equity, or be the Speaker of Parliament or a similarly high post in the public sector (the exact criteria are a bit more in-depth than this, but this is the gist of it. You can find the full criteria here). And who was the Speaker of Parliament at the time? Mdm Halimah, who was conveniently of the right race (Although there was some hooha about her actually being Indian). With the extremely strict private sector criteria and the PAP being able to effectively control who the public sector candidate was, it came as no surprise that Mdm Halimah was declared the only eligible candidate on Nomination Day. A day later, she was Singapore’s President. And all without a single vote cast by any Singaporean.
Of course, the PAP denied that this was a move specifically aimed at blocking Tan Cheng Bock’s bid for the presidency. Chan Chun Sing, Singapore’s current Minister of Trade and Industry, stated in 2017 that the Government was prepared to pay the political price over making these changes to the Constitution.
We can clearly see from the GE2020 results that a price was indeed paid. A loss of almost 9% of vote share is very significant, although a combination of the first-past-the-post rule and the GRC system ensured that the PAP still won 89.2% of the seats in Parliament despite only garnering 61.2% of the votes. On the whole, it’s naught but a scratch to the PAP’s overwhelming dominance in Parliament. The PAP still retains its supermajority and can make changes to the Constitution anytime that it likes. But the swing voters have sent a clear signal that they have not been persuaded by the PAP’s rationale.
5. Swing Voters do not want Racial Politics.
In 2019, Heng Swee Keat, Singapore’s Deputy Prime Minister and the man who is next in line to be Prime Minister (PM) commented that Singapore was not ready to have a non-Chinese PM. He further added that race is an issue that always arises at election-time in Singapore.
Let us now consider the GE2015 results. Tharman Shanmugaratnam, Singapore’s Senior Minister and someone whom many have expressed keenness to be Singapore’s next PM, obtained 79.28% of the vote share in Jurong GRC. This was above even the current Prime Minister Lee Hsien Loong, who scored 78.63% in Ang Mo Kio GRC. Tharman’s score was the highest in the entire election.
And now let us consider the GE2020 results. Tharman scored 74.62% in Jurong, again the highest scorer of the entire election, while Hsien Loong scored 71.91%. So Tharman beat the current PM again, and by an even bigger margin than the last time. Furthermore, Swee Keat, who made the infamous comments above, scored just 53.41% in East Coast.
Yes, I know I'm ignoring a lot of other factors that influenced these results. But don't these results show conclusively that Heng's comments were wrong? We have an Indian leading both the current and future PM in both elections, yet the PAP still feels the need to say that Singapore “hasn't arrived” at a stage where we can vote without race in mind. In fact, this was the same rationale that supposedly led to the reserved presidency as mentioned in my earlier point.
The swing voters have spoken, and it is exceedingly clear to me that the electorate does not care what our highest office-holders are in terms of race, whether it be the PM or the President. Our Singapore pledge firmly states “regardless of race”, and I think the results have shown that we as a people have taken it to heart. But has the PAP?
6. Voters will not be so easily manipulated.
On one hand, Singaporeans were exhorted to stay home during the Covid-19 pandemic. Contact tracing became mandatory, and groups of more than 5 were prohibited.
But on the other hand, we are also told that it’s absolutely necessary to hold an election during this same period, for Singaporeans to wait in long lines and in close proximity to each other as we congregate to cast our vote, all because the PAP needs a strong mandate.
On one hand, Heng Swee Keat lambasted the Workers' Party, claiming that it was “playing games with voters” over their refusal to confirm if they would accept NCMP seats.
But on the other hand, Heng Swee Keat was moved to the East Coast GRC at the eleventh hour in a surprise move to secure the constituency. (As mentioned above, he was aptly rewarded for this with a razor-thin margin of just 53.41% of the votes.)
On one hand, Masagos Zulkifli, PAP Vice-Chairman stated that “candidates should not be defined by a single moment in time or in their career, but judged instead by their growth throughout their life”. He said this in defense of Ivan Lim, who appears to be the very first candidate in Singaporean politics to have been pushed into retracting his candidacy by the power of non-mainstream media.
But on the other hand, the PAP called on the WP to make clear its stand on Raeesah Khan, a WP candidate who ran (and won) in Sengkang GRC for this election, stating that the Police investigation into Raeesah’s comments made on social media was “a serious matter which goes to the fundamental principles on which our country has been built”.
On one hand, Chan Chun Sing stated in 2015, referring to SingFirst’s policies about giving allowances to the young and the elderly, “Some of them promised you $300 per month. I say, please don’t insult my residents. You think…. they are here to be bribed?”
On the other hand, the PAP Government has just given out several handouts under its many budgets to help Singaporeans cope with the Covid-19 situation. [To be clear, I totally approve of these handouts. What I don’t approve is that the PAP felt the need to lambast similar policies as bribery in the past. Comparing a policy with a crime is a political low blow in my book.]
I could go on, but I think I’ve made my point. And so did the electorate in this election, putting their vote where it counted to show their disdain for the heavy-handedness and double standards that the PAP has displayed for this election.
Conclusion
I don’t say the above to put down the PAP. The PAP would have you believe that to not support them is equivalent to not wanting what’s best for Singapore. This is a false dichotomy that must be stamped out, and I am glad to see our swing voters taking a real stand with this election.
No, I say the above as a harsh but ultimately supportive letter to the PAP. As everyone can see from the results, we all still firmly believe that the PAP should be the Government. We still have faith that PAP has the leadership to take us forward and out of the Covid-19 crisis.
But we also want to send the PAP a strong signal with this vote, to bring them down from their ivory towers and down to the ground. Enough with the double standards. Enough with the heavy-handedness. Singaporeans have clearly stated their desire for a more mature democracy, and that means more alternative voices in Parliament. The PAP needs to stop acting as the father who knows it all, and to start acting as the bigger brother who can work hand in hand with his alternative younger brother towards what’s best for the entire family: Singapore.
There is a real chance that the PAP will not listen, though. As Lee Hsien Loong admitted in a rally in 2006, “if there are 10, 20… opposition members in Parliament… I have to spend my time thinking what is the right way to fix them”.
Now, the PAP has POFMA at its disposal. It still has the supermajority in Parliament, making them able to change any law in Singapore, even the Constitution at will. We have already seen them put these tools to use for its own benefit. Let us see if the PAP will continue as it has always done, or will it take this opportunity to change itself for the better. Whatever the case, we will be watching, and we will be waiting to make our roar heard once again five years down the road.
Majulah Singapura!
Article Ends Here.
Here's the link to the actual article:
https://thethoughtexperiment.org/2020/07/11/ge2020-the-roar-of-the-swing-vote
And here's the link to the other political articles I've written about Singapore:
https://thethoughtexperiment.org/2020/07/07/ge2020-the-state-of-play/
https://thethoughtexperiment.org/2015/09/10/ge2015-voting-wisely/
https://thethoughtexperiment.org/2015/09/05/expectations-of-the-opposition/
submitted by sharingan87 to singapore

[ALL] Finally finished LIS2 - some idle thoughts

Hey /lifeisstrange, it's been a while since I'd lurked around here or even posted. Things sure are pretty fucked up out there, so why not spend all of our available free time cooped up indoors playing games? Perfect way to kill time these days :)
I started LIS2 a few weeks ago, just finished a few days ago, and have been ruminating over it since. So here I am, just sharing my thoughts about an amazing game that certainly isn't without its flaws.
Apologies in advance for this reddit version of verbal diarrhea: this is just my putting words to digital paper as it comes to me.
First off, I need to state that I really didn't like that Brody just gets an honourable mention at the end of the story. (Though it's worth noting that he certainly had a coming of age story of his own.) Not even a quick "Thanks for everything" comment on his blog from a Diaz brother? At least for me the entire interaction with Brody changed how I was trying to "raise" Daniel for the rest of the game. If he had such an impact on how I would play the rest of the game as Sean, I would have liked to have had the option to reach out to him, maybe from Claire and Stephen's home in Beaver Creek or from Karen's trailer in Away. I really feel like Brody was my-Daniel's catalyst for
One thing I loved about LIS2 is that you have to live with your choices. All of them. There's no going back. It made the game more stressful, and made me more conscious about what I wanted to say. With Max in LIS1, most choices could be rewound to choose the other options. I usually went first with the "bad" options I wouldn't normally have chosen, just to see their outcomes. Then I'd rewind until I was ready to go with the options I really wanted to take. This time in LIS2 there's only one path of choices you can make, and you have to live with it. I liked the mechanic in LIS1 as it was fun, but I appreciated more the permanent nature of choices in LIS2.
And speaking of choices, I know that a lot of people didn't like Sean. They call(ed) him whiny and a pathetic attempt at portraying a teenager at 16. I actually think that it's a pretty reasonable depiction of a 16 year old in the given situation. What were you doing at 16? Were you on the lam, all while trying to raise a 9/10 year old younger sibling? While I agree that Sean's character was indeed whiny, it was well suited because he's stuck between a rock and a hard place: he has to make decisions for himself and Daniel that would keep them alive and moving. It's one thing to be a latchkey kid and raise your younger sibling with a stocked fridge and a roof over your head. It's an entirely different thing to be doing that without that stocked fridge and roof over your head, and with law enforcement hunting you down. I felt like Dontnod wrote Sean's character as best as they could. Despite his whiny nature -because, again, 16 year olds want freedom and not responsibilities - he was just completely unsure of everything he did. He only knew that he had to take care of Daniel - Daniel was his only focus and every decision had to be made around that. But Sean sure as shit didn't know what was really best, he just kind of winged it and prayed it would work later on. Guess he learned pretty quickly in Ep4 that prayer isn't everything.
Daniel is no different. I feel like Dontnod kind of dropped the ball on Daniel a bit by making him a bit too wide-eyed the entire season. I fully get that he would be this way in Episodes 1 and 2, but by Episode 3 I would have thought Daniel would have sunken into a form of depression. Children that age are emotional sponges. Yeah, the Humboldt County crew were a pretty awesome ragtag family ("friends are the family you choose"), and they seemingly took in and took care of Daniel as one of their own. But by this point in time Daniel should have recognized there's no going back to the life he used to have; I would have surmised that depression would have affected him somewhat by now. I guess Dontnod didn't want to make things any darker for the little 'un than they already were?
So that brings me to what I think is the strongest point of the game: it's a coming of age story. It's not Ferris Bueller or The Breakfast Club, but it's Dontnod's take on how one poor sod has to grow up, quickly, and take care of his wolf pack of two. I think they did a good job here, even though everything felt so far-fetched. But isn't that exactly what makes Season 2 so great? Despite the primary supernatural element that is never explained; the near comically over-the-top depiction of a religious commune; the biggest private hospital room I've ever seen to date, with a TV, and all for a yet-to-be-charged criminal?!; and the awkward "look, we want to have overtly racist people in this game, but we don't want to actually make them racist, so we'll just imply everything, OK?" characters; all of Sean's choices lead up to "What Will Daniel Do?" at the very end of Ep5 Wolves. It doesn't matter what happened to Daniel in the game, what is more important is what Sean tells Daniel after each of these occurrences; Sean's words are everything in this game. The Morality mechanic seemed a little too binary at times and I wish there was more grey area allowed, but I feel that it still worked out well in the end. Children don't quite understand the greyer areas of the world at large as much as grownups do, so I guess that's why the choices couldn't be so in the game.
Now looking back at the characters, I feel conflicted about them as a whole. I felt like the written dialogue in LIS2 was significantly better than LIS1, but the dialogue was delivered/performed better in LIS1. Does that make sense? The characters were deeper and richer in LIS2, it's just that they didn't deliver their lines as well. Or maybe it was the editing of the dialogue that made it feel more harsh, less natural. Esteban got so little time in the game, but he was the totem on which (my) Sean based all of his decisions. And Karen was easily my favourite supporting character: she starts off being the devil herself (because she isn't there to defend herself), only for us to find out that she's anything but. She's just broken, just like everybody else the Wolf Brothers meet along their journey, and trying to atone for it. There's something a bit more natural about the characters, how everybody faces some sort of struggle of their own. Again, Dontnod made each struggle too surreal since that's really the only way to fit them in a story arc that only lasts a few hours each, but that they made it such that their struggles are believable is quite the achievement. Joey in the beginning of Episode 4 is a good example of that fake-but-real kind of internal struggle. He likes Sean and even states on a few occasions that he doesn't believe Sean actually killed that cop (more correctly, I think he says that he thinks Sean doesn't have it in him to do whatever it is the police are saying he did). But he also has a job to do, and laws to follow. He also has a life of his own. Joey can't just let Sean go free, that would effectively cause him to be the one who gets locked up. But he wants to help Sean because in his gut he feels it's the right thing to do. Naturally I snuck out without whacking Joey, because I felt like Joey doesn't deserve any consequences of the ensuing jailbreak.
Same for the letter to Karen before leaving Away; I addressed it to "Mom" because I felt like her character was trying to be a mom the only way she knew how. She was not a mother in any traditional sense of the word; there's no forgiving the fact that she left the family; she is a person who realized that she would have made the world worse for her young ones if she stayed behind. Was that the right decision? I certainly can't say if it was or wasn't, but I do feel like her character was written to be one who thought she was doing the right thing by leaving, as the alternative was worse. And she implied as much at the motel and in Away. That's why I addressed her as "Mom" because after seeing what she was willing to do, I recognize that she's doing what she thinks a mother should do. It's really weird that she thinks that's what motherhood is, but that's just how she's been wired (and written).
And speaking of motherhood, man alive I was so happy to finally see David in Away. I chose to sacrifice Arcadia Bay in LIS1 so of course the David I met in Away lost Joyce, but gained Chloe. When I first started walking idly towards the silver trailer, I spotted that oh-so-recognizable painting leaning against it. I yelped aloud and willed Sean to run faster in the game, only to find that David wasn't around (yet) (dammit). I really liked that they wrote in David's character as having made amends with Chloe. Unfortunately, it took the deaths of hundreds of people to do so, but it was nice to see that he's understood that life isn't all about following the rules. The picture of our favourite Partners in Crime and Time in his trailer was a really nice nod to LIS1, and the phone conversation between David and Chloe was an especially nice touch. And when Sean was getting ready to leave, it was nice to see some of that tough love from David again. He's a hardass, but he cares. Only now he finally knows how to show that he cares. (I did some Googling to find out what David would be like if I chose to sacrifice Chloe. Turns out he gets divorced from Joyce. What the hell? He basically loses everything? Man, tough luck.)
I actually found myself spending more time roaming around the environment in LIS2 than I did with LIS1. Maybe Dontnod learned their lesson from LIS1 that people love to roam around and look under every rock. I basically did just that in LIS2, and really enjoyed it. The environments were aurally and aesthetically richer than in LIS1, but subtly so. I also felt that the side conversations with characters were also a bit less contrived. However, I think that's due to the nature of not being in and around high school characters this time around. It must be difficult to write "teenager" dialogue.
One thing that I need to compare between LIS1 and LIS2 is the way they handled the "moments". Max would sit and just take in her surroundings whereas Sean would draw. I totally see how these are actually personality traits (Max is an introvert through and through, and a shutterbug, her eyes are the camera; Sean is an artist and an extrovert introvert, pen and paper are his lens), and appreciate them both. I just loved those moments in LIS1 when Max could just sit down and enjoy the scene. The music, the cuts to different angles, all of it. There weren't as many of these kinds of moments in LIS2, even though they had many opportunities with the amazing vistas. Take the canyon landscape at the beginning of Episode 5: a perfect opportunity to just sit and take in the sights and sounds, maybe with Daniel hurling little pebbles over the edge, too.
Now that I realize I've written way more than anybody cares to ever read, I'll wrap up with the use of music. Not just that DN brought back Jonathan Morali, but that they brought back tracks from previous games to bring the player back to certain important moments. For example, when we see Chris again, we hear Sufjan Stevens's track lightly playing in the background, reminding players of our time as Captain Spirit one Saturday morning. Another time is when we're talking to David about his past and the "choices" he had to make, we hear a track from LIS1 behind his dialogue. They are really great throwbacks to previous titles, and the emotions they brought out in us. And emotional I was, when David was talking about how he and Chloe had learned to get along in their own way, with the Max and Chloe theme accompanying his storytelling.
OK, one last thing. And this will likely be an unpopular opinion for many, though shared by some. I hope that in the upcoming Tell Me Why title they'll have either Sean or Daniel be some sort of character in the game. Someone kind of like David was in LIS2. He's a surrogate father figure for a moment, just to remind them that it's going to be hard, but worth it, to do what you have to do. The symbolism of Sean/Daniel showing up is that choices matter, so choose wisely.
And for what it's worth, I got the Lyla Redemption ending.
Alright that's enough brain dumping for me. Go enjoy the rest of your day :)
submitted by unwantedoffspring to lifeisstrange

Differences between LISP 1.5 and Common Lisp, Part 1:

[Edit: I didn't mean to put a colon in the title.]
In this post we'll be looking at some of the things that make LISP 1.5 and Common Lisp different. There isn't too much surviving LISP 1.5 code, but some of the code that is still around is interesting and worthy of study.
Here are some conventions used in this post of which you might take notice:
Sources are linked sometimes below, but here is a list of links that were helpful while writing this:
The differences between LISP 1.5 and Common Lisp can be classified into the following groups:
  1. Superficial differences—matters of syntax
  2. Conventional differences—matters of code style and form
  3. Fundamental differences—matters of semantics
  4. Library differences—matters of available functions
This post will go through the first three of these groups in that order. A future post will discuss library differences, except for some functions dealing with character-based input and output, since they are a little world unto themselves.
[Originally the library differences were part of this post, but it exceeded the length limit on posts (40000 characters)].

Superficial differences.

LISP 1.5 was used initially on computers that had very limited character sets. The machine on which it ran at MIT, the IBM 7090, used a six-bit, binary-coded decimal encoding for characters, which could theoretically represent up to sixty-four characters. In practice, only forty-six were widely used. The repertoire of this character set consisted of the twenty-six uppercase letters, the nine digits, the blank character ' ', and the ten special characters '-', '/', '=', '.', '$', ',', '(', ')', '*', and '+'. You might note the absence of the apostrophe/single quote—there was no shorthand for the quote operator in LISP 1.5 because no sensible character was available.
When the LISP 1.5 system read input from cards, it treated the end of a card not like a blank character (as is done in C, TeX, etc.), but as nothing. Therefore the first character of a symbol's name could be the last character of a card, the remaining characters appearing at the beginning of the next card. Lisp's syntax allowed for the omission of almost all whitespace besides that which was used as delimiters to separate tokens.
List syntax. Lists were contained within parentheses, as is the case in Common Lisp. From the beginning Lisp had the consing dot, which was written as a period in LISP 1.5; the interaction between the period when used as the consing dot and the period when used as the decimal point will be described shortly.
In LISP 1.5, the comma was equivalent to a blank character; both could be used to delimit items within a list. The LISP I Programmer's Manual, p. 24, tells us that
The commas in writing S-expressions may be omitted. This is an accident.
Number syntax. Numbers took one of three forms: fixed-point integers, floating-point numbers, and octal numbers. (Of course octal numbers were just an alternative notation for the fixed-point integers.)
Fixed-point integers were written simply as the decimal representation of the integers, with an optional sign. It isn't explicitly mentioned whether a plus sign is allowed in this case or if only a minus sign is, but floating-point syntax does allow an initial plus sign, so it makes sense that the fixed-point number syntax would as well.
Floating-point numbers had the syntax described by the following context-free grammar, where a term in square brackets indicates that the term is optional:
    float:    [sign] integer '.' [integer] exponent
              [sign] integer '.' integer [exponent]
    exponent: 'E' [sign] digit [digit]
    integer:  digit
              integer digit
    digit:    one of '0' '1' '2' '3' '4' '5' '6' '7' '8' '9'
    sign:     one of '+' '-'
This grammar generates things like 100.3 and 1.E5 but not things like .01 or 14E2 or 100.. The manual seems to imply that if you wrote, say, (100. 200), the period would be treated as a consing dot [the result being (cons 100 200)].
Floating-point numbers are limited in absolute value to the interval (2^-128, 2^128), and eight digits are significant.
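To make the grammar concrete, here is a sketch of a recognizer for it in Common Lisp. The function and its structure are my own, purely illustrative; it simply walks the string according to the productions above.

```lisp
(defun lisp15-float-p (string)
  "Return true if STRING matches the LISP 1.5 floating-point grammar."
  (let ((i 0)
        (n (length string)))
    (flet ((digits ()                    ; consume a digit run, return its length
             (let ((start i))
               (loop while (and (< i n) (digit-char-p (char string i)))
                     do (incf i))
               (- i start)))
           (sign ()                      ; consume an optional sign
             (when (and (< i n) (member (char string i) '(#\+ #\-)))
               (incf i))))
      (sign)
      (when (plusp (digits))             ; the integer part is mandatory
        (when (and (< i n) (char= (char string i) #\.))
          (incf i)
          (let ((fraction (digits)))
            (cond ((and (< i n) (char= (char string i) #\E))
                   ;; with an exponent, the fraction is optional
                   (incf i)
                   (sign)
                   (and (<= 1 (digits) 2) (= i n)))
                  ;; without an exponent, the fraction is mandatory
                  (t (and (plusp fraction) (= i n))))))))))
```

This accepts 100.3 and 1.E5 while rejecting .01, 14E2, and 100., matching the examples discussed above.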
Octal numbers are defined by the following grammar:
octal: [sign] octal-digits 'Q' [integer]
octal-digits: octal-digit [octal-digit] [octal-digit] [octal-digit]
              [octal-digit] [octal-digit] [octal-digit] [octal-digit]
              [octal-digit] [octal-digit] [octal-digit] [octal-digit]
octal-digit: one of '0' '1' '2' '3' '4' '5' '6' '7'
The optional integer following 'Q' is a scale factor: a decimal integer representing an exponent with a base of 8. Positive octal numbers behave as one would expect: The value is shifted to the left by 3×s bits, where s is the scale factor. Octal was useful on the IBM 7090, since it used thirty-six-bit words; twelve octal digits (the maximum allowed in an octal number in LISP 1.5) thus represent a single word in a convenient way that is more compact than binary (while still being easily convertible to and from binary). If the number has a negative sign, then the thirty-sixth bit is logically ORed with 1.
The syntax of Common Lisp's numbers is a superset of that of LISP 1.5. The only major difference is in the notation of octal numbers; Common Lisp uses the sharpsign reader macro for that purpose. Because of the somewhat odd semantics of the minus sign in octal numbers in LISP 1.5, it is not necessarily trivial to convert a LISP 1.5 octal number into a Common Lisp expression resulting in the same value.
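As an illustration of those semantics, the word denoted by a LISP 1.5 octal literal can be computed like this. This is my own sketch, not code from the manual; it takes the literal already split into its digit string, scale factor, and sign.

```lisp
;; Sketch: compute the thirty-six-bit word denoted by a LISP 1.5
;; octal literal, given its digit string, scale factor, and sign.
(defun lisp15-octal-value (digits scale negativep)
  (let ((word (mod (ash (parse-integer digits :radix 8)
                        (* 3 scale))      ; shift left 3 bits per scale unit
                   (expt 2 36))))         ; keep only thirty-six bits
    (if negativep
        (logior word (expt 2 35))         ; OR the thirty-sixth bit with 1
        word)))
```

For example, the literal 7Q2 denotes 7 shifted left six bits, i.e. 448; and because a negative sign merely sets the sign bit, the Common Lisp expression equivalent to a negative octal literal is not simply the negated magnitude.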
Symbol syntax. Symbol names could be up to thirty characters in length. While the actual name of a symbol was kept on its property list under the pname indicator and could be any sequence of thirty characters, the syntax accepted by the read program for symbols was limited in a few ways. First, a name could not begin with a digit or with either of the characters '+' or '-', and its first two characters could not be '$$'. Otherwise, all the alphanumeric characters were allowed, along with the special characters '+', '-', '=', '*', '/', and '$'. The fact that a symbol can't begin with a sign character or a digit has to do with the number syntax; the fact that a symbol can't begin with '$$' has to do with the mechanism by which the LISP 1.5 reader allowed you to write characters that are usually not allowed in symbols, which is described next.
Two dollar signs initiated the reading of what we today might call an "escape sequence". An escape sequence had the form "$$xSx", where x was any character and S was a sequence of up to thirty characters not including x. For example, $$x()x would get the symbol whose name is '()' and would print as '()'. Thus it is similar in purpose to Common Lisp's | syntax. There is a significant difference: It could not be embedded within a symbol, unlike Common Lisp's |. In this respect it is closer to Maclisp's | reader macro (which created a single token) than it is to Common Lisp's multiple escape character. In LISP 1.5, "A$$X()X$" would be read as (1) the symbol A$$X, (2) the empty list, (3) the symbol X.
The following code sets up a $ reader macro so that symbols using the $$ notation will be read in properly, while leaving things like $eof$ alone.
(defun dollar-sign-reader (stream character)
  (declare (ignore character))
  (let ((next (read-char stream t nil t)))
    (cond ((char= next #\$)
           (let ((terminator (read-char stream t nil t)))
             (values (intern (with-output-to-string (name)
                               (loop for c := (read-char stream t nil t)
                                     until (char= c terminator)
                                     do (write-char c name)))))))
          (t
           (unread-char next stream)
           (with-standard-io-syntax
             (read (make-concatenated-stream
                    (make-string-input-stream "$")
                    stream)
                   t nil t))))))

(set-macro-character #\$ #'dollar-sign-reader t)

Conventional differences.

LISP 1.5 is an old programming language. Generally, compared to its contemporaries (such as FORTRANs I–IV), it holds up well to modern standards, but sometimes its age does show. And some aspects of LISP 1.5 might be surprising to programmers familiar only with Common Lisp or Scheme.
M-expressions. John McCarthy's original concept of Lisp was a language with a syntax like this (from the LISP 1.5 Programmer's Manual, p. 11):
equal[x;y]=[atom[x]→[atom[y]→eq[x;y]; T→F];
            equal[car[x];car[y]]→equal[cdr[x];cdr[y]];
            T→F]
There are several things to note. First is the entirely different phrase structure. It is an infix language, looking much closer to mathematics than the Lisp we know and love. Square brackets are used instead of parentheses, and semicolons are used instead of commas (or blanks). When square brackets do not enclose function arguments (or parameters, when on the left of the equals sign), they set up a conditional expression; the arrows separate predicate expressions and consequent expressions.
If that was Lisp, then where do s-expressions come in? Answer: quoting. In the m-expression notation, uppercase strings of characters represent quoted symbols, and parenthesized lists represent quoted lists. Here is an example from page 13 of the manual:
λ[[x;y];cons[car[x];y]][(A B);(C D)] 
As an s-expression, this would be
((lambda (x y) (cons (car x) y)) '(A B) '(C D)) 
The majority of the code in the manual is presented in m-expression form.
So why did s-expressions stick? There are a number of reasons. The earliest Lisp interpreter was a translation of the program for eval in McCarthy's paper introducing Lisp, which interpreted quoted data; therefore it read code in the form of s-expressions. S-expressions are much easier for a computer to parse than m-expressions, and also more consistent. (Also, the character set mentioned above includes neither square brackets nor a semicolon, let alone a lambda character.) But in publications m-expressions were seen frequently; perhaps the syntax was seen as a kind of "Lisp pseudocode".
Comments. LISP 1.5 had no built-in commenting mechanism. It's easy enough to define a comment operator in the language, but it seems that nobody felt a need for one.
Interestingly, FORTRAN I had comments. Assembly languages of the time sort of had comments, in that they had a portion of each line/card that was ignored in which you could put any text. FORTRAN was ahead of its time.
(Historical note: The semicolon comment used in Common Lisp comes from Maclisp. Maclisp likely got it from PDP-10 assembly language, which let a semicolon and/or a line break terminate a statement; thus anything following a semicolon is ignored. Maclisp's convention of reading numbers as octal by default, with a trailing decimal point indicating a decimal number, also comes from that assembly language.)
Code formatting. The code in the manual that isn't written using m-expression syntax is generally lacking in meaningful indentation and spacing. Here is an example (p. 49):
(TH1 (LAMBDA (A1 A2 A C) (COND ((NULL A) (TH2 A1 A2 NIL NIL C)) (T (OR (MEMBER (CAR A) C) (COND ((ATOM (CAR A)) (TH1 (COND ((MEMBER (CAR A) A1) A1) (T (CONS (CAR A) A1))) A2 (CDR A) C)) (T (TH1 A1 (COND ((MEMBER (CAR A) A2) A2) (T (CONS (CAR A) A2))) (CDR A) C)))))))) 
Nowadays we might indent it like so:
(TH1 (LAMBDA (A1 A2 A C)
       (COND ((NULL A) (TH2 A1 A2 NIL NIL C))
             (T (OR (MEMBER (CAR A) C)
                    (COND ((ATOM (CAR A))
                           (TH1 (COND ((MEMBER (CAR A) A1) A1)
                                      (T (CONS (CAR A) A1)))
                                A2
                                (CDR A)
                                C))
                          (T (TH1 A1
                                  (COND ((MEMBER (CAR A) A2) A2)
                                        (T (CONS (CAR A) A2)))
                                  (CDR A)
                                  C))))))))
Part of the lack of formatting probably stems from the primarily punched-card-based programming world of the time; you would see the indented structure only by printing a listing of your code, so there was no need to format the punched cards carefully. LISP 1.5 allowed a very free format, especially when compared to FORTRAN; the consequence is that early LISP 1.5 programs are very difficult to read because of the lack of spacing, while old FORTRAN programs are at least limited to one statement per line.
The close relationship of Lisp and pretty-printing originates in programs developed to produce nicely formatted listings of Lisp code.
Lisp code from the mid-sixties used some peculiar formatting conventions that seem odd today. Here is a quote from Steele and Gabriel's Evolution of Lisp:
This intermediate example is derived from a 1966 coding style:
DEFINE ((
(MEMBER (LAMBDA (A X) (COND ((NULL X) F)
((EQ A (CAR X) ) T)
(T (MEMBER A (CDR X))) )))
))
The design of this style appears to take the name of the function, the arguments, and the very beginning of the COND as an idiom, and hence they are on the same line together. The branches of the COND clause line up, which shows the structure of the cases considered.
This kind of indentation is somewhat reminiscent of the formatting of Algol programs in publications.
Programming style. Old LISP 1.5 programs can seem somewhat primitive. There is heavy use of the prog feature, which relates partly to the programming style that was common at the time and partly to the lack of control structures in LISP 1.5. You could express iteration only by using recursion or by using prog+go; there wasn't a built-in looping facility. There is a library function called for that is something like the early form of Maclisp's do (the later form would be inherited by Common Lisp), but no surviving LISP 1.5 code uses it. [I'm thinking of making another post about converting programs using prog to the more structured forms that Common Lisp supports, if doing so would make the logic of the program clearer. Naturally there is a lot of literature on so-called "goto elimination" and doing it automatically, so it would not present any new knowledge, but it would have lots of Lisp examples.]
LISP 1.5 did not have a let construct. You would use either a prog and setq or a lambda:
(let ((x y)) ...) 
is equivalent to
((lambda (x) ...) y) 
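This translation can itself be expressed as a macro. Here is a sketch; the name let15 is my own, and the macro simply performs the rewrite a LISP 1.5 programmer would have done by hand.

```lisp
;; Sketch: LET as sugar over LAMBDA application, the translation a
;; LISP 1.5 programmer would have performed manually.
(defmacro let15 (bindings &body body)
  `((lambda ,(mapcar #'first bindings) ,@body)
    ,@(mapcar #'second bindings)))
```

For instance, (let15 ((x 3) (y 4)) (+ x y)) expands into ((lambda (x y) (+ x y)) 3 4).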
Something that stands out immediately when reading LISP 1.5 code is the heavy, heavy use of combinations of car and cdr. This might help (though car and cdr should be left alone when they are used with dotted pairs):
(car x)   = (first x)
(cdr x)   = (rest x)
(caar x)  = (first (first x))
(cadr x)  = (second x)
(cdar x)  = (rest (first x))
(cddr x)  = (rest (rest x))
(caaar x) = (first (first (first x)))
(caadr x) = (first (second x))
(cadar x) = (second (first x))
(caddr x) = (third x)
(cdaar x) = (rest (first (first x)))
(cdadr x) = (rest (second x))
(cddar x) = (rest (rest (first x)))
(cdddr x) = (rest (rest (rest x)))
Here are some higher compositions, even though LISP 1.5 doesn't have them.
(caaaar x) = (first (first (first (first x))))
(caaadr x) = (first (first (second x)))
(caadar x) = (first (second (first x)))
(caaddr x) = (first (third x))
(cadaar x) = (second (first (first x)))
(cadadr x) = (second (second x))
(caddar x) = (third (first x))
(cadddr x) = (fourth x)
(cdaaar x) = (rest (first (first (first x))))
(cdaadr x) = (rest (first (second x)))
(cdadar x) = (rest (second (first x)))
(cdaddr x) = (rest (third x))
(cddaar x) = (rest (rest (first (first x))))
(cddadr x) = (rest (rest (second x)))
(cdddar x) = (rest (rest (rest (first x))))
(cddddr x) = (rest (rest (rest (rest x))))
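The pattern behind both tables can be mechanized. Here is a small helper of my own, for illustration only, that turns a cXr name into its first/rest composition:

```lisp
;; Sketch: expand a cXr name such as CADDR into nested FIRST/REST
;; calls, mirroring the tables above.
(defun cxr-expansion (name arg)
  (let ((letters (coerce (subseq (string-upcase name)
                                 1 (1- (length name)))  ; the A/D letters
                         'list)))
    (reduce (lambda (letter form)
              (list (ecase letter (#\A 'first) (#\D 'rest)) form))
            letters
            :from-end t          ; the rightmost letter applies innermost
            :initial-value arg)))
```

For example, (cxr-expansion "caddr" 'x) produces (FIRST (REST (REST X))).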
Things like defstruct and Flavors were many years away. For a long time, Lisp dialects had lists as the only kind of structured data, and programmers rarely defined functions with meaningful names to access components of data structures that are represented as lists. Part of understanding old Lisp code is figuring out how data structures are built up and what their components signify.
In LISP 1.5, it's fairly common to see nil used where today we'd use (). For example:
(LAMBDA NIL ...) 
instead of
(LAMBDA () ...) 
or
(PROG NIL ...)
instead of
(PROG () ...) 
Actually this practice was used in other Lisp dialects as well, although it isn't really seen in newer code.
Identifiers. If you examine the list of all the symbols described in the LISP 1.5 Programmer's Manual, you will notice that none of them differ only in the characters after the sixth character. In other words, it is as if symbol names have only six significant characters, so that abcdef1 and abcdef2 would be considered equal. But it doesn't seem like that was actually the case, since there is no mention of such a limitation in the manual. Another thing of note is that many symbols are six characters or fewer in length.
(A sequence of six characters is nice to store on the hardware on which LISP 1.5 was running. The processor used thirty-six-bit words, and characters were six-bit; therefore six characters fit in a single word. It is conceivable that it might be more efficient to search for names that take only a single word to store than for names that take more than one word to store, but I don't know enough about the computer or implementation of LISP 1.5 to know if that's true.)
Even though the limit on names was thirty characters (the longest symbol names in standard Common Lisp are update-instance-for-different-class and update-instance-for-redefined-class, both thirty-five characters in length), only a few of the LISP 1.5 names are not abbreviated. Things like terpri ("terminate print") and even car and cdr ("contents of address part of register" and "contents of decrement part of register"), which have stuck around until today, are pretty inscrutable if you don't know what they mean.
Thankfully the modern style is to limit abbreviations. Comparing the names that were introduced in Common Lisp versus those that have survived from LISP 1.5 (see the "Library" section below) shows a clear preference for good naming in Common Lisp, even at the risk of lengthy names. The multiple-value-bind operator could easily have been named mv-bind, but it wasn't.

Fundamental differences.

Truth values. Common Lisp has a single value considered to be false, which happens to be the same as the empty list. It can be represented either by the symbol nil or by (); either of these may be quoted with no difference in meaning. Anything else, when considered as a boolean, is true; however, there is a self-evaluating symbol, t, that traditionally is used as the truth value whenever there is no other more appropriate one to use.
In LISP 1.5, the situation was similar: Just like Common Lisp, nil or the empty list are false and everything else is true. But the symbol nil was used by programmers only as the empty list; another symbol, f, was used as the boolean false. It turns out that f is actually a constant whose value is nil. LISP 1.5 had a truth symbol t, like Common Lisp, but it wasn't self-evaluating. Instead, it was a constant whose permanent value was *t*, which was self-evaluating. The following code will set things up so that the LISP 1.5 constants work properly:
(defconstant *t* t)  ; (eq *t* t) is true
(defconstant f nil)
Recall the practice in older Lisp code that was mentioned above of using nil in forms like (lambda nil ...) and (prog nil ...), where today we would probably use (). Perhaps this usage is related to the fact that nil represented an empty list more than it did a false value; or perhaps the fact that it seems so odd to us now is related to the fact that there is even less of a distinction between nil the empty list and nil the false value in Common Lisp (there is no separate f constant).
Function storage. In Common Lisp, when you define a function with defun, that definition gets stored somehow in the global environment. LISP 1.5 stores functions in a much simpler way: A function definition goes on the property list of the symbol naming it. The indicator under which the definition is stored is either expr or fexpr or subr or fsubr. The expr/fexpr indicators were used when the function was interpreted (written in Lisp); the subr/fsubr indicators were used when the function was compiled (or written in machine code). Functions can be referred to based on the property under which their definitions are stored; for example, if a function named f has a definition written in Lisp, we might say that "f is an expr."
When a function is interpreted, its lambda expression is what is stored. When a function is compiled or machine coded, a pointer to its address in memory is what is stored.
The choice between expr and fexpr and between subr and fsubr is based on evaluation. Functions that are exprs and subrs are evaluated normally; for example, an expr is effectively replaced by its lambda expression. But when an fexpr or an fsubr is to be processed, the arguments are not evaluated. Instead they are put in a list. The fexpr or fsubr definition is then passed that list and the current environment. The reason for the latter is so that the arguments can be selectively evaluated using eval (which took a second argument containing the environment in which evaluation is to occur). Here is an example of what the definition of an fexpr might look like, LISP 1.5 style. This function takes any number of arguments and prints them all, returning nil.
(LAMBDA (A E)
  (PROG ()
   LOOP (PRINT (EVAL (CAR A) E))
        (COND ((NULL (CDR A)) (RETURN NIL)))
        (SETQ A (CDR A))
        (GO LOOP)))
The "f" in "fexpr" and "fsubr" seems to stand for "form", since fexpr and fsubr functions got passed a whole form.
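In Common Lisp the closest analogue of such a fexpr is a macro, which likewise receives its arguments unevaluated. Here is a sketch of the same print-everything operator; the name printall is my own.

```lisp
;; Sketch: the fexpr above recast as a Common Lisp macro.  The macro
;; receives the argument forms unevaluated and arranges for each to be
;; evaluated and printed, returning NIL.
(defmacro printall (&rest forms)
  `(progn ,@(mapcar (lambda (form) `(print ,form)) forms)
          nil))
```

Unlike the fexpr, the macro does its work by expansion at compile time rather than by calling eval with a captured environment at run time.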
The top level: evalquote. In Common Lisp, the interpreter is usually available interactively in the form of a "read-evaluate-print loop", commonly abbreviated "REPL". Its structure is exactly what you would expect from the name: Repeatedly read a form, evaluate it (using eval), and print the results. Note that this model is essentially the same as top-level file processing, except that when a file is processed only the results of the last form are printed, at the end.
In LISP 1.5, the top level is not eval, but evalquote. Here is how you could implement evalquote in Common Lisp:
(defun evalquote (operator arguments)
  (eval (cons operator arguments)))
LISP 1.5 programs commonly look like this (define takes a list of function definitions):
DEFINE (( (FUNCTION1 (LAMBDA () ...)) (FUNCTION2 (LAMBDA () ...)) ... )) 
which evalquote would process as though it had been written
(DEFINE ( (FUNCTION1 (LAMBDA () ...)) (FUNCTION2 (LAMBDA () ...)) ... )) 
Evaluation, scope, extent. Before further discussion, here is the evaluator for LISP 1.5 as presented in Appendix B, translated from m-expressions to approximate Common Lisp syntax. This code won't run as it is, but it should give you an idea of how the LISP 1.5 interpreter worked.
(defun evalquote (function arguments)
  (if (and (atom function)
           (or (get function 'fexpr)
               (get function 'fsubr)))
      (eval (cons function arguments) nil)
      (apply function arguments nil)))

(defun apply (function arguments environment)
  (cond ((null function) nil)
        ((atom function)
         (let ((expr (get function 'expr))
               (subr (get function 'subr)))
           (cond (expr (apply expr arguments environment))
                 (subr ; see below
                  )
                 (t (apply (cdr (sassoc function environment
                                        (lambda () (error "A2"))))
                           arguments
                           environment)))))
        ((eq (car function) 'label)
         (apply (caddr function)
                arguments
                (cons (cons (cadr function) (caddr function))
                      environment)))
        ((eq (car function) 'funarg)
         (apply (cadr function) arguments (caddr function)))
        ((eq (car function) 'lambda)
         (eval (caddr function)
               (nconc (pair (cadr function) arguments)
                      environment)))
        (t (apply (eval function environment) arguments environment))))

(defun eval (form environment)
  (cond ((null form) nil)
        ((numberp form) form)
        ((atom form)
         (let ((apval (get form 'apval)))
           (if apval
               (car apval)
               (cdr (sassoc form environment
                            (lambda () (error "A8")))))))
        ((eq (car form) 'quote) (cadr form))
        ((eq (car form) 'function)
         (list 'funarg (cadr form) environment))
        ((eq (car form) 'cond)
         (evcon (cdr form) environment))
        ((atom (car form))
         (let ((expr (get (car form) 'expr))
               (fexpr (get (car form) 'fexpr))
               (subr (get (car form) 'subr))
               (fsubr (get (car form) 'fsubr)))
           (cond (expr (apply expr
                              (evlis (cdr form) environment)
                              environment))
                 (fexpr (apply fexpr
                               (list (cdr form) environment)
                               environment))
                 (subr ; see below
                  )
                 (fsubr ; see below
                  )
                 (t (eval (cons (cdr (sassoc (car form) environment
                                             (lambda () (error "A9"))))
                                (cdr form))
                          environment)))))
        (t (apply (car form)
                  (evlis (cdr form) environment)
                  environment))))

(defun evcon (cond environment)
  (cond ((null cond) (error "A3"))
        ((eval (caar cond) environment)
         (eval (cadar cond) environment))
        (t (evcon (cdr cond) environment))))

(defun evlis (list environment)
  (maplist (lambda (j) (eval (car j) environment))
           list))
(The definition of evalquote earlier was a simplification to avoid the special case of special operators in it. LISP 1.5's apply can't handle special operators (which is also true of Common Lisp's apply). Hopefully the little white lie can be forgiven.)
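The evaluator relies on two LISP 1.5 library routines, sassoc and pair, that Common Lisp does not provide. Here are minimal sketches of their behavior, reconstructed from how they are used above rather than copied from the manual:

```lisp
;; Sketch: SASSOC is like ASSOC, but calls a failure continuation
;; (a function of no arguments) when the item is not found.
(defun sassoc (item alist on-failure)
  (or (assoc item alist)
      (funcall on-failure)))

;; Sketch: PAIR builds an association list from parallel lists of keys
;; and values, used to bind lambda parameters to arguments.
(defun pair (keys values)
  (mapcar #'cons keys values))
```

With these, the lambda branch of apply reads naturally: pair the parameters with the arguments and push the resulting bindings onto the environment.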
There are several things to note about these definitions. First, it should be reiterated that they will not run in Common Lisp, for many reasons. Second, in evcon an error has been corrected; the original says in the consequent of the second branch (effectively)
(eval (cadar environment) environment) 
Now to address the "see below" comments. The manual describes the actions of the interpreter in terms of a function called spread, which takes the arguments given in a Lisp function call and puts them into the machine registers expected by LISP 1.5's calling convention, and then executes an unconditional branch instruction after updating the value of a variable called $ALIST to the environment passed to eval or to apply. In the case of an fsubr, since the function will always get exactly two arguments, the interpreter places them directly in the registers instead of calling spread.
You will note that apply is considered to be a part of the evaluator, while in Common Lisp apply and eval are quite different. Here it takes an environment as its final argument, just like eval. This fact highlights an incredibly important difference between LISP 1.5 and Common Lisp: When a function is executed in LISP 1.5, it is run in the environment of the function calling it. In contrast, Common Lisp creates a new lexical environment whenever a function is called. To exemplify the differences, the following code, if Common Lisp were evaluated like LISP 1.5, would be valid:
(defun weird (a b)
  (other-weird 5))

(defun other-weird (n)
  (+ a b n))
In Common Lisp, the function weird creates a lexical environment with two variables (the parameters a and b), which have lexical scope and indefinite extent. Since the body of other-weird is not lexically within the form that binds a and b, trying to make reference to those variables is incorrect. You can thwart Common Lisp's lexical scoping by declaring those variables to have indefinite scope:
(defun weird (a b)
  (declare (special a b))
  (other-weird 5))

(defun other-weird (n)
  (declare (special a b))
  (+ a b n))
The special declaration tells the implementation that the variables a and b are to have indefinite scope and dynamic extent.
Let's talk now about the funarg branch of apply. The function/funarg device was introduced some time in the sixties in an attempt to solve the scoping problem exemplified by the following problematic definition (using Common Lisp syntax):
(defun testr (x p f u)
  (cond ((funcall p x) (funcall f x))
        ((atom x) (funcall u))
        (t (testr (cdr x)
                  p
                  f
                  (lambda () (testr (car x) p f u))))))
This function is taken from page 11 of John McCarthy's History of Lisp.
The only problematic part is the (car x) in the lambda in the final branch. The LISP 1.5 evaluator does little more than textual substitution when applying functions; therefore (car x) will refer to whatever x is bound to whenever the function (lambda expression) is applied, not to what x was when the expression was written.
How do you fix this issue? The solution employed in LISP 1.5 was to capture the environment present when the function expression is written, using the function operator. When the evaluator encounters a form that looks like (function f), it converts it into (funarg f environment), where environment is the current environment during that call to eval. Then when apply gets a funarg form, it applies the function in the environment stored in the funarg form instead of the environment passed to apply.
Something interesting arises as a consequence of how the evaluator works. Common Lisp, as is well known, has two separate name spaces for functions and for variables. If a Common Lisp implementation encounters
(lambda (f x) (f x)) 
the result is not a function applying one of its arguments to its other argument, but rather a function applying a function named f to its second argument. You have to use an operator like funcall or apply to use the functional value of the f parameter. If there is no function named f, then you will get an error. In contrast, LISP 1.5 checks for a function definition first, and only if there is none does it find the parameter f and apply its functional value. If a Lisp dialect that has a single name space is called a "Lisp-1", and one that has two name spaces is called a "Lisp-2", then I guess you could call LISP 1.5 a "Lisp-1.5"!
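A small demonstration of the Common Lisp side of this behavior; the function names here are my own:

```lisp
;; Sketch: inside the lambda, (f x) calls the global function F,
;; ignoring the parameter F that is bound to G's value.
(defun f (x) (* x 10))

(defun call-both (g x)
  (list ((lambda (f x) (f x)) g x)   ; global F: multiplies by ten
        (funcall g x)))              ; the parameter G, via FUNCALL
```

Calling (call-both #'1+ 5) returns (50 6): the lambda's f parameter never influences the (f x) call, while funcall does use the passed-in function.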
How can we deal with indefinite scope when trying to get LISP 1.5 programs to run in Common Lisp? Well, with any luck it won't matter; ideally the program does not have any references to variables that would be out of scope in Common Lisp. However, if there are such references, there is a fairly simple fix: Add special declarations everywhere. For example, say that we have the following (contrived) program, in which define has been translated into defun forms to make it simpler to deal with:
(defun f (x)
  (prog (m)
        (setq m a)
        (setq a 7)
        (return (+ m b x))))

(defun g (l)
  (h (* b a)))

(defun h (i)
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
        (setq a 4)
        (setq b 6)
        (setq i 3)
        (return (g (f 10)))))
The result of calling p should be 10/63. To make it work, add special declarations wherever necessary:
(defun f (x)
  (declare (special a b))
  (prog (m)
        (setq m a)
        (setq a 7)
        (return (+ m b x))))

(defun g (l)
  (declare (special a b l))
  (h (* b a)))

(defun h (i)
  (declare (special a b l i))
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
        (declare (special a b i))
        (setq a 4)
        (setq b 6)
        (setq i 3)
        (return (g (f 10)))))
Be careful about the placement of the declarations. It is required that the one in p be inside the prog, since that is where the variables are bound; putting it at the beginning (i.e., before the prog) would do nothing because the prog would create new lexical bindings.
This method is not optimal, since it doesn't really help with understanding how the code works (although being able to see which variables are free and which are bound, by looking at the declarations, is very helpful). A better way is to factor the variables shared among several functions (as long as you are sure they are used only in those functions) into a let. Doing that is more difficult than using global variables, but it leads to code that is easier to reason about. Of course, if a variable is used in a large number of functions, it may well be better to create a global variable with defvar or defparameter.
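As a tiny sketch of that let-factoring, here is a contrived counter of my own, assuming the shared variable is used only by these two functions:

```lisp
;; Sketch: state shared by two functions, factored into a LET
;; instead of being declared special everywhere.
(let ((counter 0))
  (defun bump-counter () (incf counter))
  (defun read-counter () counter))
```

Both functions close over the same counter binding, so no special declarations are needed and no other code can touch the variable.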
Not all LISP 1.5 code is as bad as that example!
Join us next time as we look at the LISP 1.5 library. In the future, I think I'll make some posts talking about getting specific programs running. If you see any errors, please let me know.
submitted by kushcomabemybedtime to lisp

MAME 0.218


It’s time for MAME 0.218, the first MAME release of 2020! We’ve added a couple of very interesting alternate versions of systems this month. One is a location test version of NMK’s GunNail, with different stage order, wider player shot patterns, a larger player hitbox, and lots of other differences from the final release. The other is The Last Apostle Puppetshow, an incredibly rare export version of Home Data’s Reikai Doushi. Also significant is a newer version of Valadon Automation’s Super Bagman. There’s been enough progress made on Konami’s medal games for a number of them to be considered working, including Buttobi Striker, Dam Dam Boy, Korokoro Pensuke, Shuriken Boy and Yu-Gi-Oh Monster Capsule. Don’t expect too much in terms of gameplay though — they’re essentially gambling games for children.
There are several major computer emulation advances in this release, in completely different areas. Possibly most exciting is the ability to install and run Windows NT on the MIPS Magnum R4000 “Jazz” workstation, with working networking. With the assistance of Ash Wolf, MAME now emulates the Psion Series 5mx PDA. Psion’s EPOC32 operating system is the direct ancestor of the Symbian operating system, which powered a generation of smartphones. IDE and SCSI hard disk support for Acorn 8-bit systems has been added, the latter being one of the components of the BBC Domesday Project system. In PC emulation, Windows 3.1 is now usable with S3 ViRGE accelerated 2D video drivers. F.Ulivi has contributed microcode-level emulation of the iSBC-202 floppy controller for the Intel Intellec MDS-II system, adding 8" floppy disk support.
Of course there are plenty of other improvements and additions, including re-dumps of all the incorrectly dumped GameKing cartridges, disassemblers for PACE, WE32100 and “RipFire” 88000, better Geneve 9640 emulation, and plenty of working software list additions. You can get the source and 64-bit Windows binary packages from the download page (note that 32-bit Windows binaries and “zip-in-zip” source code are no longer supplied).

MAME Testers Bugs Fixed

New working machines

New working clones

Machines promoted to working

Clones promoted to working

New machines marked as NOT_WORKING

New clones marked as NOT_WORKING

New working software list additions

Software list items promoted to working

New NOT_WORKING software list additions

Source Changes

submitted by cuavas to emulation

Wall Street Week Ahead for the trading week beginning October 28th, 2019

Good Saturday morning to all of you here on wallstreetbets. I hope everyone on this sub made out pretty nicely in the market this past week, and is ready for the new trading week ahead.
Here is everything you need to know to get you ready for the trading week beginning October 28th, 2019.

The Fed and Apple earnings will make or break market’s return to record highs in the week ahead - (Source)

Stocks will try in the week ahead to break the all-time highs set earlier in the year as a slew of S&P 500 companies get set to report.
Stock prices are bumping up against their highs, but whether they can burst through and hold gains may, for the near term, depend on what investors hear from Jerome Powell in the week ahead.
In a week stacked with major events, the Fed’s two-day meeting is likely to be the high point. The Federal Open Market Committee is expected to make its third quarter-point interest rate cut Wednesday afternoon, followed by comments from Fed Chairman Powell. Those comments could be his most important message of the next few months, as investors watch to see whether he holds the door open to future rate cuts, or signals it’s time to pause, as some economists expect.
“Our view is they’ll be done after this. We’re not expecting a cut in December, and we’re not expecting cuts next year. The economy, in my mind, looks like it’s stabilizing, and there should be more evidence of that in the next couple of weeks. focusing on the labor market is the key thing,” said Drew Matus, chief market strategist at MetLife Investment Management. If the labor market holds up, expectations for rate cuts should decline. “I do think the dissenters are arguing they shouldn’t be cutting at all.”
But Matus’ view is just one of many on Wall Street. Some economists expect another cut in December, while others expect one or more cuts next year, depending on how they view the economy. Goldman Sachs economists laid out a case where the Fed will clearly signal that it plans to pause after Wednesday.
All of this could make for volatility in stocks and bonds, depending on which market view prevails in Powell’s comments. “It’s going to be choppy going into the Fed,” said Andrew Brenner of National Alliance. In the past week, yields were higher with the 10-year Treasury yield touching 1.8% Friday.
The S&P 500 was up 1.2% for the week, ending at 3,022, just below its closing high. On Friday, it briefly traded above the July 26 high of 3,025. The Dow ended the week with a gain of 0.7%, at 26,956, and it remains about 1% below its closing high.
In addition, the earnings calendar remains heavy with about 145 S&P 500 companies releasing earnings, including Alphabet Monday and big oil Exxon Mobil and Chevron Friday. On Wednesday, earnings are expected from Apple, which is setting new highs of its own.

Big economic reports

On top of that, November kicks off Friday with what looks to be the most important day for economic data of the new month. Besides the critical monthly employment report, there is the key ISM manufacturing report, expected to show a contraction in manufacturing activity for a third month.
Both reports could be distorted by the GM strike, which is expected to result in an October employment report with fewer than 100,000 jobs. According to Refinitiv, total nonfarm payrolls are expected to be 90,000, while manufacturing jobs are expected to decline by 50,000. That would include the impact of GM workers, but also the employees of the many suppliers and services that support the car company’s manufacturing operations.
“The jobs number will be big, but the ISM could be bigger. If that turns up, like Markit [PMI] suggested, that could be a big deal,” said Leuthold Group Chief Investment Strategist James Paulsen. On Thursday, Markit flash PMI manufacturing data for October was higher than expected, and still has not shown a contraction.
“If it turns up, I think that’s going to affect a lot of people and how they feel about things. That could take on a whole new dimension of what happens to Wall Street earnings estimates,” he said.
Manufacturing data has dragged, due to the impact of tariffs and the trade war, and some big companies have taken a hit as a result, like Caterpillar, which on Wednesday reported weaker than expected earnings and sales. Caterpillar also cut its outlook, in large part due to weakness in China. Caterpillar shares were slammed, but on Friday the stock bounced back by 3.5%.

Stocks at ‘inflection point’

Quincy Krosby, Prudential Financial’s chief market strategist, said the fact Caterpillar was able to come back at the end of the week was a positive for the market, which she says is now entering the late year seasonal period where stocks typically do well. At the same time, she said news for the market looks like it’s about to get “less bad.”
“‘Less bad’ is not a full-fledged agreement with China. Less bad is a truce. It means that Dec. 15 extension in tariffs does not happen,” she said, adding the market appears to be at an inflection point, with investors expecting an agreement of some type between President Donald Trump and President Xi Jinping when they meet in November.
“I’m not bullish. I’m not bearish. I’m optimistic. This market has been led by the defensive sectors. You’re starting to see that move into consumer discretionary. It’s telling you the market is seeing growth, albeit not stellar growth, but when it gets ‘less bad’ you’re going to see that it’s being reflected in this inflection point in the market,” she said. “We’re seeing a move more and more into the cyclical and growth sectors, and by the way, we’re seeing a steepening of the yield curve.”
The yield curve represents the difference between the yields of two Treasury securities of different maturities. When the curve inverts, the yield on the shorter-maturity security, in this case the 2-year, becomes higher than that of a longer one, say the 10-year. That part of the curve was temporarily inverted, and if it had stayed inverted it would have been a recession warning.
The 10-year has been moving higher, and the 1.80% level will be important if the yield can stay above it.
“If it pushes through 1.80, you’re going to take the inversion out, by the bond market, not the Fed,” Paulsen said. Paulsen said it would be a sign of confidence in the economy if yields can push higher.
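For anyone following along at home, the 2s/10s spread talk above boils down to simple arithmetic; here's a minimal sketch using illustrative (not live) yields:

```python
# Minimal sketch: the 2s/10s Treasury spread and an inversion check
# (illustrative yields, not live market data).
def spread_bps(two_year: float, ten_year: float) -> float:
    """10-year minus 2-year yield, in basis points; negative = inverted."""
    return (ten_year - two_year) * 100

print(round(spread_bps(1.62, 1.80), 1))  # 18.0 bps: curve upward-sloping
print(round(spread_bps(1.85, 1.80), 1))  # -5.0 bps: 2-year above 10-year, inverted
```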
The Fed taking a pause may add to that sense. “I think most people think one more cut and done,” he said. “The bigger news will be what [Powell] says in that press conference. He can go pretty off script sometimes.”

‘Greater optimism’ in market

Paulsen said stocks could be in a good period, and earnings news seems to be already priced in. “The data by and large has been okay. You have earnings that are okay, and there’s no sense of imminent recession. It just seems there’s greater optimism,” he said.
Of the approximately 200 S&P companies that reported by Friday morning, more than 78% have beaten on earnings per share, according to I/B/E/S data from Refinitiv. Earnings are expected to decline by 2% for the third quarter, based on estimates and results from companies that already reported.
Paulsen said there’s some sense in the market that Brexit will not end in a worst case scenario, but it is something to watch in the week ahead as British lawmakers decide whether to hold an election.
Jack Ablin, chief investment officer with Cresset Wealth Advisors, said he thinks Brexit would be a bigger deal than the trade agreement for the world economy, if it goes poorly, with the U.K. leaving the European Union with no deal. “A no deal Brexit is likely to take 2 percentage points off of British growth...It would take 1% off European growth...I think that’s significant,” Ablin said. “I think investors are underplaying it because it’s so binary. It’s hard to position for a binary outcome. If we get some resolution there, to me, that has the biggest impact for the markets.”

This past week saw the following moves in the S&P:

(CLICK HERE FOR THE FULL S&P TREE MAP FOR THE PAST WEEK!)

Major Indices for this past week:

(CLICK HERE FOR THE MAJOR INDICES FOR THE PAST WEEK!)

Major Futures Markets as of Friday's close:

(CLICK HERE FOR THE MAJOR FUTURES INDICES AS OF FRIDAY!)

Economic Calendar for the Week Ahead:

(CLICK HERE FOR THE FULL ECONOMIC CALENDAR FOR THE WEEK AHEAD!)

Sector Performance WTD, MTD, YTD:

(CLICK HERE FOR FRIDAY'S PERFORMANCE!)
(CLICK HERE FOR THE WEEK-TO-DATE PERFORMANCE!)
(CLICK HERE FOR THE MONTH-TO-DATE PERFORMANCE!)
(CLICK HERE FOR THE 3-MONTH PERFORMANCE!)
(CLICK HERE FOR THE YEAR-TO-DATE PERFORMANCE!)
(CLICK HERE FOR THE 52-WEEK PERFORMANCE!)

Percentage Changes for the Major Indices, WTD, MTD, QTD, YTD as of Friday's close:

(CLICK HERE FOR THE CHART!)

S&P Sectors for the Past Week:

(CLICK HERE FOR THE CHART!)

Major Indices Pullback/Correction Levels as of Friday's close:

(CLICK HERE FOR THE CHART!)

Major Indices Rally Levels as of Friday's close:

(CLICK HERE FOR THE CHART!)

Most Anticipated Earnings Releases for this week:

(CLICK HERE FOR THE CHART!)

Here are the upcoming IPO's for this week:

(CLICK HERE FOR THE CHART!)

Friday's Stock Analyst Upgrades & Downgrades:

(CLICK HERE FOR THE CHART LINK #1!)
(CLICK HERE FOR THE CHART LINK #2!)
(CLICK HERE FOR THE CHART LINK #3!)
(CLICK HERE FOR THE CHART LINK #4!)

Bullish Halloween Trading Strategy Treat Next Week

Next week provides a special short-term seasonal opportunity, one of the most consistent of the year. The last 4 trading days of October and the first 3 trading days of November have a stellar record over the last 25 years. From the tables below:
  • Dow up 19 of last 25 years, average gain 2.1%, median gain 1.4%.
  • S&P up 21 of last 25 years, average gain 2.1%, median gain 1.5%.
  • NASDAQ up 21 of last 25 years, average gain 2.7%, median gain 2.3%.
  • Russell 2000 up 19 of last 25 years, average gain 2.2%, median gain 2.5%.
Many refer to our Best Six Months Tactical Seasonal Switching Strategy as the Halloween Indicator or Halloween Strategy and of course “Sell in May”. These catch phrases highlight our discovery that was first published in 1986 in the 1987 Stock Trader’s Almanac that most of the market’s gains are made from October 31 to April 30, while the market goes sideways to down from May through October.
Since 1950 DJIA is up 7.5% November-April and up only 0.6% May-October. We encouraged folks not to fear Octoberphobia early this month and to wait for our MACD Buy Signal, which came on October 11. We have since been positioning more bullishly in sector and major U.S. market ETFs and with a new basket of stocks. And the next seven days have historically been a bullish trade.
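The window stats quoted above are just win-rate, average, and median calculations over the yearly returns for that 7-day stretch; here's a quick sketch with made-up sample numbers:

```python
# Sketch of the seasonal-window stats quoted above: given yearly returns (%)
# for the 7-day Halloween window, compute win rate, average, and median.
# The sample returns below are hypothetical, not the Almanac's data.
from statistics import mean, median

def window_stats(returns):
    up_years = sum(1 for r in returns if r > 0)
    return up_years, len(returns), mean(returns), median(returns)

sample = [2.5, -0.8, 1.4, 3.9, 0.6]  # hypothetical years
up, total, avg, med = window_stats(sample)
print(f"up {up} of {total}, avg {avg:.1f}%, median {med:.1f}%")  # up 4 of 5, avg 1.5%, median 1.4%
```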
(CLICK HERE FOR THE CHART!)
(CLICK HERE FOR THE CHART!)
(CLICK HERE FOR THE CHART!)
(CLICK HERE FOR THE CHART!)

Normally a top month, November has been lackluster in Pre-Election Years

November maintains its status among the top performing months as fourth-quarter cash inflows from institutions drive November to lead the best consecutive three-month span November-January. However, the month has taken hits during bear markets, and November 2000, down 22.9% (undecided election and a nascent bear), was NASDAQ’s second worst month on record—only October 1987 was worse.
November begins the “Best Six Months” for the DJIA and S&P 500, and the “Best Eight Months” for NASDAQ. Small caps come into favor during November, but don’t really take off until the last two weeks of the year. November is the number-two DJIA (since 1950), NASDAQ (since 1971) and Russell 2000 (since 1979) month. November is best for S&P 500 (since 1950) and Russell 1000 (since 1979).
(CLICK HERE FOR THE CHART!)
In pre-election years, November’s performance is noticeably weaker. DJIA has advanced in nine of the last 17 pre-election years since 1950 with an average gain of 0.3%. S&P 500 has been up in 10 of the past 17 pre-election years, also gaining on average a rather paltry 0.3%. Small-caps and techs perform better with Russell 2000 climbing in 6 of the past 10 pre-election years, averaging 1.2%. NASDAQ has been up in 7 of the last 12 pre-election year Novembers with an average 0.9% gain. Contributing to pre-election year November’s weaker performance are nasty declines in 1987, 1991 and 2007.

Q4 Rally Is Real. Don’t Let 2018 Spook You

Understandably folks are apprehensive about the perennial fourth quarter rally this year after the debacle that culminated in the Christmas Eve Crumble in 2018. But the history is clear. The fourth quarter is the best quarter of the year going back to 1949, except for NASDAQ where Q1 leads Q4 by 4.5% to 4.0%, since 1971.
Historically, the “Sweet Spot” of the 4-Year Election Cycle is the three-quarter span from Q4 Midterm Year through Q2 Pre-Election Year, averaging a gain of 19.3% for DJIA and 20.0% for S&P 500 since 1949 and 29.3% for NASDAQ since 1971. Conversely the weakest two-quarter span is Q2-Q3 of the Midterm Year, averaging a loss of -1.2% for DJIA and -1.5% for S&P 500 since 1949 and -5.0% for NASDAQ since 1971.
Market action was impacted by some more powerful forces in 2018 that trumped (no pun intended) seasonality. Q2-Q3 was up 9.8% for DJIA, 10.3% for S&P and 13.9% for NASDAQ. Q4 was horrible, down -11.8% for DJIA, -14.0% for S&P and -17.5% for NASDAQ. Q1-Q2 of the pre-election year, especially Q1, gained all that back.
Pre-Election year Q4 is still one of the best quarters of the 4-Year Cycle, ranked 5th, for average gains of 2.6% for DJIA and 3.2% for S&P since 1949 and 5.4% for NASDAQ. Additionally, from the Pre-Election Seasonal Pattern we updated in last Friday’s post, you can see how the market tends to make a high near year-end in the Pre-Election Year. So, barring some new unexpected outside event, Q4 Market Magic is expected to impress once again this year.
(CLICK HERE FOR THE CHART!)

2019 May Be One of the Best Years Ever

“Everything is awesome, when you’re living out a dream.” The Lego Movie
As the S&P 500 Index continues to flirt with new record highs, something under the surface is taking place that is making 2019 extremely special. Or dare we say, “awesome”.
First, let’s look back at last year. 2018 was the first year since 1969 in which the S&P 500 (stocks) and the 10-year Treasury bond (bonds) both finished the year with a negative return. Toss in the fact that gold and West Texas Intermediate (WTI) crude oil were both down last year, and it was one of the worst years ever for a diversified portfolio.
“As bad as last year was for investors, 2019 is a mirror image, with stocks, bonds, gold, and crude oil all potentially finishing the year up double digits for the first time in history,” explained LPL Senior Market Strategist Ryan Detrick.
As shown in the LPL Chart of the Day, it has been a great year for stocks, bonds, gold, and crude oil. Of course, there are still more than two months to go in 2019, but this year is shaping up to be one of the best years ever for these four important assets.
(CLICK HERE FOR THE CHART!)

An Early Look at Earnings

We're now in the thick of the Q3 earnings reporting period, with 130 companies reporting just since the close last night. As shown in our Earnings Explorer snapshot below, earnings will be in overdrive for the next two weeks before dying down in mid-November.
(CLICK HERE FOR THE CHART!)
Through yesterday's close, 248 companies had reported so far this season, and 75% of them had beaten consensus bottom-line EPS estimates. However, just 63% of stocks have beaten sales estimates, and more companies have lowered guidance than raised guidance. In terms of stock price reaction to reports this season, so far investors have seen earnings as relatively bullish as the average stock that has reported has gained 0.60% on its earnings reaction day. Below we show another snapshot from our Earnings Explorer featuring the aggregate results of this season's reports and a list of the stocks that have reacted the most positively to earnings. Four stocks so far have gained more than 20% on their earnings reaction days -- PETS, BIIB, APHA, and LLNW.
(CLICK HERE FOR THE CHART!)
We provide clients with a beat-rate monitor on our Earnings Explorer page as well. Below is a chart showing the rolling 3-month EPS and sales beat rates for US companies over the last 5 years. After a dip in the EPS beat rate earlier in the year, we've seen it steadily increase over the last few months up to its current level of 64.46%. That's more than five percentage points above the historical average of 59.37%.
In terms of sales, 57.87% of companies have beaten top-line estimates over the last 3 months, which is much closer to the historical average than the bottom-line beat rate.
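A beat rate like the ones quoted above is simply the share of reports whose actual result topped the consensus estimate; here's a minimal sketch with made-up numbers:

```python
# Sketch of a simple beat-rate calculation like the one described above:
# the share of reports whose actual EPS beat the consensus estimate.
# The figures below are hypothetical, not this quarter's data.
def beat_rate(actuals, estimates):
    beats = sum(a > e for a, e in zip(actuals, estimates))
    return beats / len(actuals) * 100

# hypothetical quarter: 8 of 10 companies beat
actual    = [1.10, 0.55, 2.00, 0.30, 0.95, 1.42, 0.12, 3.10, 0.80, 0.47]
consensus = [1.00, 0.60, 1.90, 0.25, 0.90, 1.40, 0.15, 3.00, 0.75, 0.45]
print(beat_rate(actual, consensus))  # 80.0
```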
(CLICK HERE FOR THE CHART!)

Banks - On To The Next Test

It has been a pretty monumental two weeks for the KBW Bank index. Since the close on 10/8, the index has rallied just under 9% as earnings reports from some of the largest US banks received a warm welcome from Wall Street. The index is now once again testing the top-end of its range, one which it has unsuccessfully tested multiple times in the last year. If you think the repeated tests of 3,000 for the S&P 500 over the last 18 months have been dramatic, the current go around with 103 for the KBW Bank Index has been the sixth such test in the last year! We would also note that prior to last year's fourth quarter downturn, the same level that has been acting as resistance for the KBW Bank index was previously providing support.
In the case of each prior failed break above 103 for the KBW Bank index, sell-offs of at least 5% (and usually 10%+) followed, but one thing the index has going for it even if the sixth time isn't the charm is that just yesterday it broke above its downtrend that has been in place since early 2018. The group has passed one test at least! From here, if we do see a pullback, that former downtrend line should provide support.
(CLICK HERE FOR THE CHART!)
Turning to the KBW Index's individual components, the table below lists each of the 24 stocks in the index along with how each one has performed since the index's recent low on 10/8 and on a YTD basis (sorted by performance since 10/8). In the slightly more than two weeks since the index's short-term low, every stock in the index is up, and by at least 4%. That's a pretty broad rally!
Leading the way to the upside, State Street (STT) has rallied nearly 20%, while First Republic (FRC), Northern Trust (NTRS), and Bank of America (BAC) have jumped more than 13%. In the case of STT, the rally of the last two weeks has also moved the stock into the green on a YTD basis.
(CLICK HERE FOR THE CHART!)

STOCK MARKET VIDEO: Stock Market Analysis Video for Week Ending October 25th, 2019

(CLICK HERE FOR THE YOUTUBE VIDEO!)

STOCK MARKET VIDEO: ShadowTrader Video Weekly 10.27.19

(CLICK HERE FOR THE YOUTUBE VIDEO!)
Here are the most notable companies (tickers) reporting earnings in this upcoming trading week ahead-
  • $AAPL
  • $AMD
  • $FB
  • $T
  • $SHOP
  • $HEXO
  • $BYND
  • $SPOT
  • $GOOGL
  • $MA
  • $BABA
  • $WBA
  • $GE
  • $SBUX
  • $TWLO
  • $MRK
  • $GRUB
  • $ABBV
  • $ON
  • $PFE
  • $ENPH
  • $QSR
  • $GM
  • $MO
  • $AWI
  • $L
  • $TEX
  • $AMG
  • $BMY
  • $XOM
  • $CHKP
  • $AKAM
  • $CTB
  • $PINS
  • $EXAS
  • $EPD
  • $KHC
  • $ELY
  • $AMGN
  • $CI
  • $X
  • $GLW
  • $LYFT
  • $MCY
  • $DO
  • $AYX
  • $YUM
(CLICK HERE FOR NEXT WEEK'S MOST NOTABLE EARNINGS RELEASES!)
(CLICK HERE FOR NEXT WEEK'S HIGHEST VOLATILITY EARNINGS RELEASES!)
(CLICK HERE FOR MOST ANTICIPATED EARNINGS RELEASES FOR THE NEXT 5 WEEKS!)
Below are some of the notable companies coming out with earnings releases this upcoming trading week ahead which includes the date/time of release & consensus estimates courtesy of Earnings Whispers:

Monday 10.28.19 Before Market Open:

(CLICK HERE FOR MONDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Monday 10.28.19 After Market Close:

(CLICK HERE FOR MONDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES LINK #1!)
(CLICK HERE FOR MONDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES LINK #2!)

Tuesday 10.29.19 Before Market Open:

(CLICK HERE FOR TUESDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES LINK #1!)
(CLICK HERE FOR TUESDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES LINK #2!)

Tuesday 10.29.19 After Market Close:

(CLICK HERE FOR TUESDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES LINK #1!)
(CLICK HERE FOR TUESDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES LINK #2!)

Wednesday 10.30.19 Before Market Open:

(CLICK HERE FOR WEDNESDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES LINK #1!)
(CLICK HERE FOR WEDNESDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES LINK #2!)

Wednesday 10.30.19 After Market Close:

(CLICK HERE FOR WEDNESDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES LINK #1!)
(CLICK HERE FOR WEDNESDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES LINK #2!)
(CLICK HERE FOR WEDNESDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES LINK #3!)
(CLICK HERE FOR WEDNESDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES LINK #4!)

Thursday 10.31.19 Before Market Open:

(CLICK HERE FOR THURSDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES LINK #1!)
(CLICK HERE FOR THURSDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES LINK #2!)
(CLICK HERE FOR THURSDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES LINK #3!)

Thursday 10.31.19 After Market Close:

(CLICK HERE FOR THURSDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES LINK #1!)
(CLICK HERE FOR THURSDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES LINK #2!)

Friday 11.1.19 Before Market Open:

(CLICK HERE FOR FRIDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Friday 11.1.19 After Market Close:

(CLICK HERE FOR FRIDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!)

Apple, Inc. $246.58

Apple, Inc. (AAPL) is confirmed to report earnings at approximately 4:30 PM ET on Wednesday, October 30, 2019. The consensus earnings estimate is $2.84 per share on revenue of $62.57 billion and the Earnings Whisper ® number is $2.93 per share. Investor sentiment going into the company's earnings release has 72% expecting an earnings beat. The company's guidance was for earnings of $2.59 to $2.93 per share. Consensus estimates are for earnings to decline year-over-year by 2.41% with revenue decreasing by 0.52%. Short interest has increased by 13.2% since the company's last earnings release while the stock has drifted higher by 13.9% from its open following the earnings release to be 25.3% above its 200 day moving average of $196.73. Overall earnings estimates have been revised higher since the company's last earnings release. On Monday, October 14, 2019 there was some notable buying of 28,061 contracts of the $220.00 put expiring on Friday, November 1, 2019. Option traders are pricing in a 4.5% move on earnings and the stock has averaged a 5.1% move in recent quarters.

(CLICK HERE FOR THE CHART!)
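For what it's worth, the "pricing in a 4.5% move" figures in these writeups are typically backed out from the cost of the at-the-money straddle; here's a rough sketch using hypothetical option prices (not real quotes):

```python
# Rough sketch of how an "implied move" is commonly estimated:
# expected % move ≈ ATM straddle cost / spot price.
# The option prices below are hypothetical, not actual quotes.
def implied_move(call_mid: float, put_mid: float, spot: float) -> float:
    """Approximate expected earnings move, in percent."""
    return (call_mid + put_mid) / spot * 100

# e.g. AAPL near $246.58 with a hypothetical $5.60 call and $5.50 put:
move = implied_move(5.60, 5.50, 246.58)
print(f"{move:.1f}%")  # 4.5%
```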

Advanced Micro Devices, Inc. $32.71

Advanced Micro Devices, Inc. (AMD) is confirmed to report earnings at approximately 4:20 PM ET on Tuesday, October 29, 2019. The consensus earnings estimate is $0.18 per share on revenue of $1.80 billion and the Earnings Whisper ® number is $0.18 per share. Investor sentiment going into the company's earnings release has 64% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 80.00% with revenue increasing by 8.89%. Short interest has increased by 21.5% since the company's last earnings release while the stock has drifted higher by 2.0% from its open following the earnings release to be 15.4% above its 200 day moving average of $28.35. Overall earnings estimates have been revised lower since the company's last earnings release. On Thursday, October 17, 2019 there was some notable buying of 28,665 contracts of the $29.00 put expiring on Friday, December 20, 2019. Option traders are pricing in a 9.6% move on earnings and the stock has averaged a 12.8% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Facebook Inc. $187.89

Facebook Inc. (FB) is confirmed to report earnings at approximately 4:05 PM ET on Wednesday, October 30, 2019. The consensus earnings estimate is $1.90 per share on revenue of $17.33 billion and the Earnings Whisper ® number is $2.02 per share. Investor sentiment going into the company's earnings release has 79% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 7.95% with revenue increasing by 26.25%. Short interest has decreased by 0.2% since the company's last earnings release while the stock has drifted lower by 9.1% from its open following the earnings release to be 5.2% above its 200 day moving average of $178.54. Overall earnings estimates have been revised higher since the company's last earnings release. On Tuesday, October 22, 2019 there was some notable buying of 20,043 contracts of the $325.00 call expiring on Friday, January 15, 2021. Option traders are pricing in a 5.9% move on earnings and the stock has averaged an 8.4% move in recent quarters.

(CLICK HERE FOR THE CHART!)

AT&T Corp. $36.91

AT&T Corp. (T) is confirmed to report earnings at approximately 6:20 AM ET on Monday, October 28, 2019. The consensus earnings estimate is $0.93 per share on revenue of $45.52 billion and the Earnings Whisper ® number is $0.94 per share. Investor sentiment going into the company's earnings release has 57% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 3.33% with revenue decreasing by 0.48%. The stock has drifted higher by 14.7% from its open following the earnings release to be 11.1% above its 200 day moving average of $33.21. Overall earnings estimates have been revised lower since the company's last earnings release. On Tuesday, October 8, 2019 there was some notable buying of 308,450 contracts of the $30.00 call expiring on Friday, January 17, 2020. Option traders are pricing in a 4.1% move on earnings and the stock has averaged a 5.1% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Shopify Inc. $317.45

Shopify Inc. (SHOP) is confirmed to report earnings at approximately 7:00 AM ET on Tuesday, October 29, 2019. The consensus earnings estimate is $0.11 per share on revenue of $381.46 million and the Earnings Whisper ® number is $0.16 per share. Investor sentiment going into the company's earnings release has 74% expecting an earnings beat. The company's guidance was for revenue of $377.00 million to $382.00 million. Consensus estimates are for year-over-year earnings growth of 83.33% with revenue increasing by 41.25%. Short interest has decreased by 19.4% since the company's last earnings release while the stock has drifted lower by 5.0% from its open following the earnings release to be 17.7% above its 200 day moving average of $269.78. Overall earnings estimates have been revised lower since the company's last earnings release. On Tuesday, October 22, 2019 there was some notable buying of 1,505 contracts of the $360.00 call expiring on Friday, November 1, 2019. Option traders are pricing in a 9.4% move on earnings and the stock has averaged a 6.6% move in recent quarters.

(CLICK HERE FOR THE CHART!)

HEXO Corp. $2.38

HEXO Corp. (HEXO) is confirmed to report earnings after the market closes on Monday, October 28, 2019. The consensus estimate is for a loss of $0.05 per share on revenue of $19.30 million. Investor sentiment going into the company's earnings release has 49% expecting an earnings beat. Short interest has increased by 107.7% since the company's last earnings release while the stock has drifted lower by 61.5% from its open following the earnings release to be 56.7% below its 200 day moving average of $5.50. Overall earnings estimates have been revised lower since the company's last earnings release. On Thursday, October 10, 2019 there was some notable buying of 4,144 contracts of the $4.00 call expiring on Friday, February 21, 2020. Option traders are pricing in a 23.1% move on earnings and the stock has averaged a 5.5% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Beyond Meat, Inc. $100.81

Beyond Meat, Inc. (BYND) is confirmed to report earnings at approximately 4:00 PM ET on Monday, October 28, 2019. The consensus earnings estimate is $0.05 per share on revenue of $77.10 million and the Earnings Whisper ® number is $0.06 per share. Investor sentiment going into the company's earnings release has 43% expecting an earnings beat. The stock has drifted lower by 45.9% from its open following the earnings release. Overall earnings estimates have been revised higher since the company's last earnings release. The stock has averaged a 25.8% move on earnings in recent quarters.

(CLICK HERE FOR THE CHART!)

Spotify Technology S.A. $120.69

Spotify Technology S.A. (SPOT) is confirmed to report earnings at approximately 6:00 AM ET on Monday, October 28, 2019. The consensus estimate is for a loss of $0.32 per share on revenue of $1.92 billion and the Earnings Whisper ® number is ($0.36) per share. Investor sentiment going into the company's earnings release has 51% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 186.49% with revenue increasing by 22.15%. Short interest has decreased by 13.8% since the company's last earnings release while the stock has drifted lower by 19.0% from its open following the earnings release to be 11.7% below its 200 day moving average of $136.67. Overall earnings estimates have been revised higher since the company's last earnings release. On Wednesday, October 16, 2019 there was some notable buying of 1,974 contracts of the $109.00 put expiring on Friday, November 1, 2019. Option traders are pricing in a 7.7% move on earnings and the stock has averaged a 3.1% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Alphabet, Inc. -

Alphabet, Inc. (GOOGL) is confirmed to report earnings at approximately 4:00 PM ET on Monday, October 28, 2019. The consensus earnings estimate is $12.57 per share on revenue of $32.71 billion and the Earnings Whisper ® number is $12.94 per share. Investor sentiment going into the company's earnings release has 69% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 3.75% with revenue decreasing by 3.05%. Short interest has decreased by 4.4% since the company's last earnings release while the stock has drifted higher by 3.0% from its open following the earnings release to be 8.3% above its 200 day moving average of $1,167.05. Overall earnings estimates have been revised higher since the company's last earnings release. On Friday, October 18, 2019 there was some notable buying of 1,578 contracts of the $1,200.00 put expiring on Friday, November 15, 2019. Option traders are pricing in a 4.6% move on earnings and the stock has averaged a 4.8% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Mastercard Inc $270.19

Mastercard Inc (MA) is confirmed to report earnings at approximately 7:50 AM ET on Tuesday, October 29, 2019. The consensus earnings estimate is $2.01 per share on revenue of $4.42 billion and the Earnings Whisper ® number is $2.06 per share. Investor sentiment going into the company's earnings release has 76% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 12.92% with revenue increasing by 13.39%. Short interest has increased by 11.4% since the company's last earnings release while the stock has drifted lower by 3.3% from its open following the earnings release to be 7.8% above its 200 day moving average of $250.57. Overall earnings estimates have been unchanged since the company's last earnings release. On Wednesday, October 23, 2019 there was some notable buying of 8,143 contracts of the $260.00 call expiring on Friday, November 1, 2019. Option traders are pricing in a 3.9% move on earnings and the stock has averaged a 2.6% move in recent quarters.

(CLICK HERE FOR THE CHART!)

DISCUSS!

What are you all watching for in this upcoming trading week?
I hope you all have a wonderful weekend and a great trading week ahead wallstreetbets.
submitted by bigbear0083 to wallstreetbets [link] [comments]

An electrical engineer's opinion on the Librem 5.

Hello everyone. In light of the most recent update, "Supplying the Demand", I would like to share my opinions on the current state of this device.
The following is some basic info of my background. You are free to criticize any and all aspects of this post.
  1. I am an electrical engineer who specializes in digital signal processing (DSP), systems (debug), and comms.
  2. I currently work at a large company that operates in the cell phone industry. My role is within a 5G research/testing department.
  3. This is my main Reddit account which is reasonably old and active. I typically lurk a lot and rarely post.
  4. My knowledge of programming is very limited. I perform 95% of my job functions with Python and Matlab. This will be a hardware and systems level discussion of the Librem 5.
The CEO of Purism, Mr. Todd Weaver, outlined three major problem areas within the current iteration of the Librem 5: Thermals, Power, and Reception. Let us go through these in order.
=========================================================
Thermals:
Thermals and power are closely intertwined, so let's focus only on Purism's options to fix thermals, assuming they make no changes to improve power consumption. Given that the Librem 5 is (thankfully) a thick device, I see no reason why Purism would not be able to fix the thermal issues. In a worst case scenario, they would have to redo the motherboard layout, add some thermal pads/paste, and maybe add a thin yet expensive copper vapor chamber. This would mean a possible delay and an additional bill of materials cost of 20-30 dollars. In my opinion, the thermal problems are solvable and within reach.
Power:
Because of the strict requirements Purism placed on the goals of this device (regarding binary blobs), they have chosen modem(s) that were not designed for this use case. All four variants of the offered modems by both modem vendors (Gemalto and Broadmobi) are internet of things (IOT) class chips. From an EE perspective, these modems are fine in the right context.
Industrial communication with large equipment (shipping yards)?
Great.
Vending machine credit card processing?
Also Great.
A mobile device (UE) that users will be moving around (mobility) and expecting good reception on a strict power budget?
Not so great. And thus we arrive at the root of the power and reception issues. I am going to cover reception in its own section, so let's talk power.
The large modem vendors in the smartphone space (Qualcomm, Samsung, Huawei/HiSilicon, MediaTek, Intel) spend a huge amount of time and effort on power management features. Not only is logic-level hardware design done with power in mind, but once the chip is fully taped out, hundreds of engineers sink months of effort into improving power characteristics via firmware development and testing. As much as we all hate binary blobs that may (probably) spy on us, these companies have good reason to keep their firmware (and thus their power-saving IP) secret. This firmware and digital-logic-level power-saving effort creates significant competitive advantages between the modem vendors.
When a company markets their modem as "IOT", they are effectively admitting that little to no effort was spent keeping chip power in check. In the example IOT applications I mentioned (vending machines and large industrial equipment), power does not matter: the devices themselves draw far more power than the modem inside them, and space is not a concern. So companies making IOT products with these modems simply ignore the power draw and slap on a large heat sink. From lurking on r/linux and r/Purism, I have seen others call out the modems without going in depth on why these products even exist. Yes, the specifications and capabilities of these modems are far lower. So be it. I think all of us are fine with "100 MBit" peak down-link (reality will be 10-20). The problem is that these chips were not designed for power efficiency and were never intended to be in a small, compact device. You would not put the engine of a Prius into a flatbed truck. The engineers at Toyota never intended for a Prius engine to go inside such a vehicle. The same situation has happened here.
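To illustrate why modem power draw dominates a phone's standby life, here is a crude battery-life comparison. The currents are hypothetical round numbers I chose for illustration (~10 mA average for a flagship modem with aggressive power saving vs ~80 mA for an IOT modem with power-saving features disabled); they are not measurements of any of the modems discussed here.

```python
# Rough standby-time comparison for a hypothetical 3500 mAh battery.
# Average modem currents below are illustrative assumptions, not data.
def standby_hours(battery_mah, avg_current_ma):
    """Hours of standby = battery capacity / average current draw."""
    return battery_mah / avg_current_ma

print(round(standby_hours(3500, 10)))  # tuned flagship modem -> 350 h
print(round(standby_hours(3500, 80)))  # IOT modem, no tuning -> 44 h
```

The point of the sketch is the ratio, not the absolute numbers: an untuned modem can plausibly cut standby time by close to an order of magnitude, which is why the firmware power-saving work matters so much.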
Now on to how Purism can fix this power problem. With a herculean effort, the firmware developers employed by Purism (and hopefully some community members) can improve power characteristics. I suspect Purism employees have spent most of their time getting the modem firmware and RF front-end SW into a functional state. There was a blog post somewhere where a Purism employee brought up a call over the air (OTA). I can't find it, but that was by far the most important milestone of their effort. Getting past RACH and acquiring a base-station OTA is huge in the industry. The first phase of binary blob development is predominantly focused on integrating features while avoiding attach failures and BLER issues. In this first phase, power saving features are typically disabled to make everything else easier to debug. It is safe to say that the Purism employees have had neither the time nor the resources to even start on modem/RF power saving features. Again, in my opinion, the power problem can be solved, but it will be a massive, exhausting undertaking.
Reception:
As I have explained above, IOT-class modems are not designed for, and do not care about, certain features that a regular smartphone (henceforth referred to as a "UE") needs to function well. Some examples are:
  1. Mobility. The ability of a UE to switch to new base-stations as the user travels (walking, driving, whatever). This is distinct from the ability of the UE to attach (pass RACH msg 4) to a cell tower from boot or a total signal loss.
  2. Compatibility with all LTE bands. This is why Purism has to support four modems and why you, the user, will likely have a somewhat unpleasant time setting things up.
  3. Interoperability testing vs. standards regression testing. Suppose the LTE specs allow 1000 different configurations for a cell network and the towers within it. Large modem vendors rigorously test hundreds of possible configurations, even though the carriers (Verizon, Sprint, China Mobile, ...) and the base-station vendors (Huawei, Nokia, Ericsson, ...) only use a few dozen. This means that niche bugs are unfortunately likely to show up.
  4. Low-SNR performance. Companies who deploy these modems either place their devices in physical locations that get good SNR (~20 dB) or simply attach a giant antenna for an extra 6-10 dB of gain. Users of cellular devices expect basic connectivity for voice calls, SMS texts, and notification batches even when the SNR is bad (1 bar ~= 7 dB SNR; NOTE: EEs use SNR and SINR interchangeably depending on background). IOT modems do not have the hardware blocks to handle low-SNR signals; this keeps the chip small and cheap. DSP tricks like higher-order filter banks, oversampling, and many other linear-algebra techniques likely cannot run on the modem in real time, rendering them useless (wireless channel coherence is often quite short).
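The dB figures in the list above can be made concrete with a few lines of arithmetic. The SNR values used here are the illustrative ones from the list (7 dB for "1 bar"), not measurements of any device.

```python
import math

# dB <-> linear conversions behind the antenna-gain and SNR claims above.
def db_to_linear(db):
    """Convert a dB power ratio to a linear ratio."""
    return 10 ** (db / 10)

def linear_to_db(ratio):
    """Convert a linear power ratio to dB."""
    return 10 * math.log10(ratio)

# A 6 dB antenna gain is roughly a 4x increase in received power:
print(db_to_linear(6))  # ~3.98

# Shannon capacity per Hz (an upper bound) at "1 bar" vs a weaker signal:
for snr_db in (7, 1):
    snr = db_to_linear(snr_db)
    print(round(math.log2(1 + snr), 2))  # bits/s/Hz: 2.59, then 1.18
```

This is why those 6-10 dB of "giant antenna" gain matter so much to IOT deployments: they shift the operating point well away from the low-SNR regime these chips cannot handle.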
What concerns me the most is that in the "Supplying the Demand" post, Mr. Weaver only implies that there is a reception issue by very briefly mentioning an "antenna routing" problem. I do not find that claim plausible. UE baseband antennas are typically PIFA, patch, or log-periodic designs. Depending on many factors beyond my knowledge, you can get around 6-15 dB of gain from the antennas alone. Even though I am a DSP engineer, my job requires a surface-level knowledge of antenna radiation patterns. Up front, I can tell you that antenna placement cannot be, and is not, the issue. In the Librem 5 batches that do not have metal construction, there should be zero problems: plastic does not interfere with radio waves enough to cause more than 1-1.5 dB of loss in the absolute worst case. In the devices with metal bodies, there should be no issue either, because of antenna bands. The image I linked is a modern ultra-high-end device where you can easily see two thin rectangular plastic antenna bands. There is a reason modern antenna bands are so small: it has become incredibly easy (and thus cheap) to mass-produce highly directive antennas, especially designs intended for UEs. As a student working in a lab on campus, we had a tight budget and needed to buy antennas for a system we were building. For legal reasons, we were operating in the 1.3 GHz band. Unfortunately, buying off the shelf was impossible because all the cheap, ready-made antennas were designed for various cell phone bands. We ended up ordering a custom design (Gerber files from a fellow student) and fabricated 150 large PIFA antennas for ~$100.
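To quantify why a 1-1.5 dB plastic-enclosure loss is negligible: under ideal free-space propagation, received power falls off as 20*log10(distance), so a fixed loss of L dB shrinks the usable range by a factor of 10^(-L/20). This is textbook link-budget math applied to the worst-case number above, not a measurement of the Librem 5.

```python
# Range impact of a fixed enclosure loss, assuming ideal free-space
# propagation (path loss grows 20*log10(d) with distance d).
def range_factor(loss_db):
    """Fraction of original range remaining after loss_db of loss."""
    return 10 ** (-loss_db / 20)

print(round(range_factor(1.5), 3))  # ~0.841 -> about 16% less range
```

Real indoor propagation decays faster than free space, which makes the range penalty even smaller in practice; either way, a 1.5 dB loss cannot explain a serious reception problem on its own.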
In summary, the large section above is the justification for the following strong opinion: I believe there may be serious reception issues with the Librem 5, and those issues are not related to antennas. Mr. Weaver's in-passing, extremely brief mention of "antenna routing" issues may be the tip of the (reception/SNR) iceberg.
=========================================================
I want to make clear that I do not hold ill will against Purism or FOSS mobile efforts. I absolutely hate that any activity on my smartphone goes directly to Google. For years, I have been holding onto a 100-200 dollar class smartphone because use of said device must be kept to a minimum to protect my privacy (I try to keep all my online activity on a laptop that I control). However, this entire post is an opinionated criticism of Purism's hardware choices. At the end of the day, a cellular device that truly protects your privacy but has potentially serious hardware and reception issues is no different from an Android or iOS phone that has had its antennas and RF cards ripped out. A smartphone is only useful when it can be used. Otherwise, a laptop on a WiFi connection with VoIP (and a VPN) will be objectively more useful.
submitted by parakeetfour to linux
