
IQ Option • Binary Options

The official placiibo app subreddit!

The current purpose of this subreddit is to discuss the developments of the placiibo TestFlight and to gather beta feedback.
[link]

A place to help anyone who has a uterus

This sub is dedicated to providing information and resources to those in need of services in states that have passed the heartbeat bill. Please read the info in the sidebar 💖
[link]

Lee Elbaz Has So Many Binary Options Fraud Victims That the US Government Needs to Notify Them En Masse

While following our updates, you have probably read up on the case against Lee Elbaz, a binary options fraudster from Israel. Elbaz ran a company called Yukom Communications, and scammed investors through the BigOption and BinaryBook websites.
While there are a number of binary options cases which are ongoing right now, this is one of the biggest. In fact, it is so big that the United States government asked the Court on October 15 if it could notify Elbaz’s victims publicly instead of privately.
Normally, victims in cases like these are contacted one by one, but as there are thousands of people who may have been scammed, this would not be a practical way to handle the notifications. If the motion to notify the victims publicly en masse is passed, the Department of Justice will do so at the URL: https://www.justice.gov/criminal-vns/case/lee-elbaz.
In the meantime, be sure to keep up with developments in USA v. Elbaz.
submitted by enivid to binaryoptions [link] [comments]

The binary options industry has evolved a great deal in the last 2 years. Currently there are more than 600 brokers worldwide, and it is difficult to choose a safe brokerage. At binaryswap.com we offer a comparison of the brokers we have also tried ourselves. https://binaryswap.com/en/

submitted by binaryswapcom to BinaryOptions_2016 [link] [comments]

VALORANT Patch Notes 1.09

VALORANT Patch Notes 1.09

Visualization of changes
Riot KOREA official breakdown video w/ English Subtitles

AGENT UPDATES

OMEN

Paranoia
We’re keeping an eye on the overall power level of Paranoia, but as a first step wanted to resolve visual issues where players hit with Paranoia appear outside of its impact on their screen.

JETT

Blade Storm (Burst Fire)
While we continue to investigate some of her outsized strengths, we think the burst fire on her Blade Storm has been over-performing at long ranges. The burst fire is intended to be a close range attack, but we found it frequently getting frags at over 20 meters away. These changes aim to rein in its effective range while retaining its close range potency.

VIPER

Fuel
Now that Viper can place her wall pre-round, we want her to be able to act with her team right at barrier drop without the tension of also trying to maximize her fuel for an initial move.
Snake Bite
Immediately dropping the vulnerable debuff upon exit wasn't creating the threat we'd hoped for when we added it. This change should make the Viper (and team) advantage window more realistic, as well as project a unique threat on opponents playing around it.
Viper’s Pit
The combination of a slow placement and re-equip time was resulting in Viper players getting too hurt or killed while casting ults in a situation we felt should be pretty safe. This change should increase the positional options available while casting, and get your weapon up sooner.

REYNA

Empress
At its previous fire rate, we felt Empress was too effective when using heavies/smgs, AND too fast to master the change in spray pattern on rifles. We hope this change allows us to address both issues at once, while also giving us a chance to have a unified fire rate increase (matching Brimstone’s stim below) that players can learn and master.

BRIMSTONE

Stim Beacon
Paired with Reyna’s change (above), we felt Brimstone’s stim could use a little more punch. This also unifies our two fire rate increase buffs, making them easier to learn.

WEAPON UPDATES

Operator

All Weapons

COMPETITIVE UPDATES

This is already a very rare occurrence, but it can happen more often for high rank players—especially in premade groups. We are also doing some tuning behind the scenes to keep high rank matches found after long queue times reasonably balanced and fair.

SOCIAL UPDATES

Players that have been reported for inappropriate Riot IDs will now be reviewed automatically after the match has ended. If their name is flagged as inappropriate, they will be forced to change their Riot ID the next time they log in to the Riot Client.
Some sneaky people were impersonating system messages to troll others into quitting a match. Enough!
Sorting algorithm for the social panel has been updated to make it more intuitive for players as they interact with it.

BUG FIXES

submitted by MentallyStableMan to ValorantCompetitive [link] [comments]

Facebook Connect / Quest 2 - Speculations Megathread

EDIT: MAJOR UPDATE AT BOTTOM
Welcome to the "Speculations" mega thread for the device possibly upcoming in the Oculus Quest line-up. This thread will be a compilation of leaks, speculation & rumors updated as new information comes out.
Let's have some fun and go over the leaks, rumors, and speculation leading up to Facebook Connect. We'll have a full megathread going during Connect, but this should be a great thread to look back on afterward.
Facebook Connect is happening September 16th at 10 AM PST, more information can be found here.

Leaks
In March, Facebook’s public Developer Documentation website started displaying a new device called ‘Del Mar’, with a ‘First Access’ program for developers.
In May, we got the speculated specs, based on the May Bloomberg Report (Original Paywall Link)
• “at least 90Hz” refresh rate
• 10% to 15% smaller than the current Quest
• around 20% lighter
• “the removal of the fabric from the sides and replacing it with more plastic”
• “changing the materials used in the straps to be more elastic than the rubber and velcro currently used”
• “a redesigned controller that is more comfortable and fixes a problem with the existing controller”

On top of that, the "Jedi Controller" drivers leaked, which are now assumed to be V3 Touch Controllers for the upcoming device.
The IMUs seem significantly improved & the reference to 60Hz (vs 30Hz) also seems to imply improved tracking.
It's also said to perhaps have improved haptics & analog finger sensing instead of binary/digital.
Now as of more recent months, we had the below leaks.
Render (1), (2)
Walking Cat seems to believe the device is called "Quest 2"; unfortunately, his Twitter account has since been taken down.
Real-life pre-release model photos
Possible IPD Adjustment
From these photos and details we can discern that:
Further features speculation based on firmware digging (thanks Reggy04 from the VR Discord for quite a few of these), as well as other sources, all linked.

Additional Sources: 1/2/3/4
Headset Codenames
We've seen a few codenames going around at this point, Reggy04 provided this screenshot that shows the following new codenames.
Pricing Rumors
So far, the most prevalent pricing we've seen is 299 for 64GB and 399 for 256GB.
These were shown by a Walmart page for Point Reyes with a release date of September 16 and a Target price leak with a street date of October 13th

Speculation
What is this headset?
Speculation so far is this headset is a Quest S or Quest 2
OR
This is a flat-out cheaper-to-manufacture, small upgrade to the Oculus Quest to keep up with demand and to iterate the design slowly.
Again, This is all speculation, nothing is confirmed or set in stone.
What do you think this is and what we'll see at FB Connect? Let's talk!
Rather chat live? Join us on the VR Discord
EDIT: MAJOR UPDATE - Leaked Videos.
6GB of RAM, XR2 Platform, "almost 4k display" (nearly 2k per eye) Source
I am mirroring all the videos in case they get pulled down.
Mirrors: Oculus Hand Tracking , Oculus Casting, Health and Safety, Quest 2 Instructions, Inside the Upgrade
submitted by charliefrench2oo8 to OculusQuest [link] [comments]

Trial Results: Should we suspend the Simple Questions rule?

tl;dr: probably not, no.

On July 28th, 2020, MFA_Nay asked openly for feedback on the "Simple Questions Rule" and how we as moderators handle content curation on the sub. We were responding to feedback from many users who were concerned that the sub was stale and/or that they were unable to "ask for advice" on the front page. It's been a while since we've considered test-running the counterfactual to our norm, so it seemed appropriate to give it a shot.
On July 31st, 2020, we officially suspended the rule. We were originally considering doing a month-long trial, but came to the conclusion (based on rapid and strong feedback from many regulars, combined with exasperation on our end) that a week would be quite sufficient.
On August 7th, 2020, we wrapped up the trial and solicited feedback in both comment and survey form.
I don't intend to put too many personal takes up here, and will add them down below as a top level comment, but I did promise you some data and figures, so I will deliver those here.
When evaluating the questionnaire, the goal was to find patterns that would pass reasonable muster. The data are too small and perhaps too biased* for any real power (*though there's a case to be made that the respondents are sufficiently representative of the population we wanted the most feedback from; never underestimate the convenience sample).
Looking at respondents' primary reasons for being on MFA, it seems like most people are here to lurk. I'm surprised by how many people said their reason here is "To Give Advice", and wonder if the question should have instead been split into a binary by combining "To Lurk" and "To Get Advice".
Looking at respondents' time on MFA, the mean/median/mode have been here a couple of years (2-4), and barely 15% report being here less than a year.
Most respondents preferred heavier curation (mean 7.3), and their curation preferences were not associated with their tenure here.
Looking beyond the survey at comment and post frequency, you can see that the volume of posts is driven almost entirely by questions and the comment volume is slowly decreasing but steady.
What does this mean? Well, probably not much. Allowing simple questions does not drastically change the traffic, nor does it seem to make regulars (read: the content creators here) very happy. Multiple have expressed in comments and messages that they are less inclined to create content, and on a 5-option Likert scale question asking "Did allowing Simple Questions on the frontpage make you more or less likely to create content?", half said "Much less likely", less than 10% said "Somewhat more likely", and I didn't even realize there was a fifth option until I was rereading the question and typing this post, because no one put "Much more likely".
submitted by zacheadams to malefashionadvice [link] [comments]

An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

An introduction to Linux through Windows Subsystem for Linux

GitHub Pages link

Introduction and motivation

tl;dr skip to next section
So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of the options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online, so Google if in doubt.

Setting up WSL

So if you've read this far, I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here; I'll be covering what they talk about and diving into some other stuff.

1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart 
Now you'll want to perform a hard shutdown on your computer. This can become unnecessarily complicated because of Windows' fast startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shut down. Great! Now open up Command Prompt and type in
wsl --help 
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.

2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
![Windows Store](https://theshepord.github.io/intro-to-WSL/docs/images/winstore.png) Installing plain-old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use-cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to de-compress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle around two different username/password combinations, but it's up to you.
Finally, to upgrade all your packages, type in
sudo apt-get update 
And then
sudo apt-get upgrade 
apt-get is the Ubuntu package manager; this is what you'll be using to install additional programs on WSL.

3. Making things nice and crispy: an introduction to UNIX-based filesystems

tl;dr skip to the next section
The two above steps are technically all you need for running WSL on your system. However, you may notice that whenever you open up the Ubuntu app your current folder seems to be completely random. If you type in pwd (for Print Working Directory; 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/<your_ubuntu_username>. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. Probably you won't get any output, because surprise surprise this folder is not your Windows home folder and is in fact empty (okay it's actually not empty, which we'll see in a bit. If you type in ls -a, a for All, you'll see other files, but notice they have a period in front of them. This is a convention for specifying files that should be hidden by default, and ls, as well as most other commands, will honor this convention. Anyways).
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux after all. Notice how, when you typed pwd earlier, the address you got was /home/<your_ubuntu_username>. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see what are the top-most directories in your system. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/<your_windows_username>. When we open up Ubuntu, we don't want it tossing us in this random /home/<your_ubuntu_username> directory, we want our Windows home folder. Let's change that!

4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu's password. sudo is a command that gives you root privileges in bash (akin to Windows's right-click then selecting 'Run as administrator'). vim is a command-line text-editing tool, which out-of-the-box functions kind of like a crummy Notepad (you can customize it infinitely though, and some people have insane vim setups. Appendix B has more info). /etc/passwd is a plaintext file that historically was used to store passwords back when encryption wasn't a big deal, but now instead stores essential user info used every time you open up WSL.
Anyway, once you've typed that in, your shell should look something like this: ![vim /etc/passwd](https://theshepord.github.io/intro-to-WSL/docs/images/vim-etc-passwd.png)
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks like
theshep:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash 
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that section, and delete /home/<your_ubuntu_username> by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/<your_windows_username>. Type that in. For me, the line now looks like
theshep:x:1000:1000:,,,:/mnt/c/Users/lucas:/bin/bash 
Next, press esc to exit insert mode, then type in the following:
:wq 
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Windows home folder! However, things seem to be lacking their usual color...

5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Windows home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/<your_ubuntu_username>, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home/<your_ubuntu_username>/. ~ 
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the /. at the end is cp-specific syntax that lets it copy everything inside the directory, including hidden files, and the ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/<your_windows_username>) without having to type all that in again. Once you've run this, all your configuration files should now be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!

6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then, type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent and you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /, this is saying 'delete literally everything and don't ask for confirmation', your computer will die. Newer versions of rm fail when you type this in, but don't push your luck. You've been warned. Be careful.
export DISPLAY=:0: if you install VcXsrv (XLaunch), this line allows you to open graphical interfaces through Ubuntu. The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.

Appendix A: brief intro to top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
  • bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usr/bin, but bin gets loaded earlier in the booting process so it contains the most important commands.
  • boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
  • dev: devices, provides files that allow Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it.
  • etc: no idea why it's called etc, but it contains system-wide configuration files
  • home: equivalent to Windows' C:/Users folder, contains home folders for the different users. In an Ubuntu system, under /home/<username> you'd find the Documents folder, Downloads folder, etc.
  • lib: libraries used by the system
  • lib64: 64-bit libraries used by the system
  • mnt: mount, where your drives are located
  • opt: third-party applications that (usually) don't have any dependencies outside the scope of their own package
  • proc: process information, contains runtime information about your system (e.g. memory, mounted devices, hardware configurations, etc)
  • run: directory for programs to store runtime information.
  • srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
  • sys: system, provides information about different I/O devices to the Linux Kernel. If dev files allow you to access I/O devices, sys files tell you information about these devices.
  • tmp: temporary, these are system runtime files that are (in most Linux distros) cleared out after every reboot. It's also sort of deprecated for security reasons, and programs will generally prefer to use run.
  • usr: contains additional UNIX commands, header files for compiling C programs, among other things. Kind of like bin but for less important programs. Most of everything you install using apt-get ends up here.
  • var: variable, contains variable data such as logs, databases, e-mail etc, but that persist across different boots.
Also keep in mind that all of this is just convention. No Linux distribution needs to follow this file structure, and in fact almost all will deviate from what I just described. Hell, you could make your own Linux fork where /mnt/c information is stored in tmp.

Appendix B: random resources

EDIT: implemented various changes suggested in the comments. Thanks all!
submitted by HeavenBuilder to linux4noobs [link] [comments]

I created a mathematically optimal team generator!

Hi all,

I've been playing FPL for a few years now, and by no means am I an expert. However, I like math and particularly optimization problems. And a few days ago I thought to use my math knowledge for something useful.

My goal was to start from some metric that predicts the amount of points a player will score (either in the next gameweek, or over the whole season). From that metric, I wanted to generate the mathematically optimal team, aka choose the 15 players that will give me the most points, while staying within budget. I realized this is a constrained knapsack problem, which can be solved by dedicated solvers as long as the optimization problem is properly defined. Note that while I make a big assumption by choosing some metric from which I start, the solver actually finds the most optimal team, without any prior assumptions about best formation, budget spread, etc!

(Warning: from this point onward it gets kinda math-y, so turn back or skip ahead to the results if that's not your thing)

MATH

So first, the optimization variable needed to be defined. For this purpose I introduced a binary variable x which is basically a vector of all players in the game, where a value of 1 indicates that player is part of our dream team and a 0 means it's not.

Secondly, an objective function needs to be defined, which is what we want to maximize. In our case, this is the total expected points our dreamteam will score. I included double captain points and reduced points for bench players here. The objective function is linear, which is nice since it is convex (an important property which makes solving the problem much easier, and is even required for most solvers).

Lastly are the constraints. Obviously, there is the 100M budget constraint. Then we also want the required amount of goalkeepers, defenders, midfielders and forwards. Then we need to keep in mind the formation constraints, and lastly are the max 3 players per club constraints. Luckily, these are all linear (so convex) constraints.
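Putting the last three paragraphs together (and leaving out the captaincy doubling and bench weighting, which only rescale the point coefficients), the problem described here can be written roughly as follows, where p_i is the predicted points and c_i the cost of player i, and the quotas are the standard 15-man FPL squad of 2 GK / 5 DEF / 5 MID / 3 FWD:

\[
\begin{aligned}
\max_{x \in \{0,1\}^N} \quad & \sum_{i=1}^{N} p_i x_i \\
\text{subject to} \quad & \sum_{i=1}^{N} c_i x_i \le 100, \\
& \sum_{i \in \mathrm{GK}} x_i = 2, \quad \sum_{i \in \mathrm{DEF}} x_i = 5, \quad \sum_{i \in \mathrm{MID}} x_i = 5, \quad \sum_{i \in \mathrm{FWD}} x_i = 3, \\
& \sum_{i \in \text{club } k} x_i \le 3 \quad \text{for every club } k.
\end{aligned}
\]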

I solved this problem using CVX for MATLAB, particularly with the Gurobi solver since it allows mixed integer programs. It tries to find the optimal variable x* which maximizes the objective function while staying within the constraints. And amazingly, it actually comes up with solutions!
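If you'd rather tinker without a MATLAB/CVX licence, the same kind of formulation can be sketched in Python with the open-source PuLP library, which ships with the free CBC MIP solver. This is only an illustration, not the author's code: the player fields (cost, position, club, expected_points), the quotas argument, and the pick_squad helper are invented for the example, and captain/bench weighting is left out.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def pick_squad(players, budget=100.0, quotas=None):
    # Standard FPL squad composition: 2 GK, 5 DEF, 5 MID, 3 FWD (15 players).
    quotas = quotas or {"GK": 2, "DEF": 5, "MID": 5, "FWD": 3}
    prob = LpProblem("fpl_squad", LpMaximize)
    # Binary decision variable: x[i] = 1 if player i makes the squad.
    x = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(len(players))]
    # Objective: total predicted points of the selected squad.
    prob += lpSum(x[i] * p["expected_points"] for i, p in enumerate(players))
    # Budget constraint (prices in millions).
    prob += lpSum(x[i] * p["cost"] for i, p in enumerate(players)) <= budget
    # Positional quotas.
    for pos, n in quotas.items():
        prob += lpSum(x[i] for i, p in enumerate(players) if p["position"] == pos) == n
    # At most 3 players from any one club.
    for club in {p["club"] for p in players}:
        prob += lpSum(x[i] for i, p in enumerate(players) if p["club"] == club) <= 3
    prob.solve()  # uses the bundled CBC solver by default
    return [p for i, p in enumerate(players) if x[i].value() > 0.5]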

RESULTS
So like I said before, I need to start from some metric that indicates how many points a player will score (if you have any recommendations, let me know!). For a lack of better options, I chose two different metrics:

  1. The total points scored by the player last year
  2. The expected points scored by the player in the next gameweek (ep_next in the FPL API, for fellow nerds)

Obviously, both metrics are not perfect. The first one doesn't take into account transfers, promoted teams, injuries, fixtures, position changes etc. However, it should work decently for making a set-and-forget team with proven PL players.

The second metric seems to have a problem with overrating bench players of top PL teams such as Ozil, Minamino, etc. I'm not really sure why, but it's a metric taken directly from FPL with undisclosed underlying math so it's not my problem. Also, keep in mind that since the first gameweek does not feature City/Utd/Burnley/Villa players, this metric predicts them to score 0 points so they won't feature in the optimal team.

Team 1: Last year's dreamteam
Bench:

Team 2: Next week's dreamteam
Bench:

Both teams cost exactly 100M.

At first glance, there are some obvious flaws with both teams, but most of them are because the metric used as input is flawed, as I explained before. Lundstram is obviously a much worse choice this year due to various reasons, and Team 2 has some top 6 players which are very much not nailed.

However. What I think is interesting is that both teams have only 2 starting midfielders. This despite the trend of people stacking premium midfielders. On the other hand, premium defenders seem to be very good value, and the importance of TAA and Robertson is underlined. Similarly, near-premium forwards in the 7.5-10 price range seem to be a good choice.

CONCLUSION
I'm quite content with my optimal team generator. Using it, I don't need to use vague value metrics such as VAPM. The input can be any metric which relates simply to how many points a player will score. Choices about relative value of e.g. defenders against midfielders, formation, budget spread etc. are all taken out of my hands with this team generator. The team that is generated is only as good as the metric used as input. But given a certain input metric, you can be sure that the generated team is optimal.

I would gladly share my MATLAB code if there is any interest. Also, I'm open to suggestions on how to extend it. EDIT: Here it is.


(Tiny disclaimer: Remember when I said: "without any prior assumptions"? That is a lie. There is one tiny assumption I made, which is how often bench players are subbed on. I guesstimated this to happen approximately 10% of the time.)
submitted by nectri42 to FantasyPL [link] [comments]

Retard Bot Update 2: What is there to show for six months of work?

Retard Bot Update 2: What is there to show for six months of work?
What is there to show? Not shit, that's why I made this pretty 4K desktop background instead:
4K
On the real: I've been developing this project like 6 months now, what's up? Where's that video update I promised, showing off the Bot Builder? Is an end in sight?
Yes, sort of. I back-tested 6 months of data, drawn from a sixteen-year span of time (2 bear, 2 bull, and 2 crab months), at over 21% on a net SPY-neutral basis. But that's not good enough to be sure / reliable. I had gotten so focused on keeping the project pretty and making a video update that I was putting off major, breaking changes that I needed to make. The best quant fund ever made, the Medallion fund, was once capable of roughly 60% per year consistently; in Retard Bot's case the target is 1.5% compounded weekly. "But I make 60% on one yolo," sure, whatever, but can you do it again every year, with 100% of your capital, where failure means losing everything? If you could, you'd be loading your Lambo onto your Yacht right now instead of reading this autistic shit.
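For anyone who wants to check what that weekly target compounds to, here's the arithmetic in two lines of Python (52 weeks per year, roughly 520 weeks over the 10-year horizon discussed in the next section):

# 1.5% per week, compounded
print(1.015 ** 52)             # ~2.17x per year
print(25_000 * 1.015 ** 520)   # ~57.6 million after 10 years, i.e. the $25K -> $57M figure below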

The End Goal

1.5% compounded weekly average is $25K -> $57M in 10 years, securing a fairly comfortable retirement for your wife's boyfriend. It's a stupidly ambitious goal. My strategy to pull it off is actually pretty simple. If you look at charts for the best performing stocks over the past 10 years, you'll find that good companies move in the same general trajectory more often than they don't. This means the stock market moves with momentum. I developed a simple equation to conservatively predict good companies' movements one week into the future by hand, and made 100%+ returns 3 weeks in a row. Doing the math took time, and I realized a computer could do much more complex math, on every stock, much more efficiently, so I developed a bot and it did 100% for 3 consecutive weeks, buying calls in a bull-market.
See the problem there? The returns were good but they were based on a biased model. The model would pick the most efficient plays on the market if it didn't take a severe downturn. But if it did, the strategy would stop working. I needed to extrapolate my strategy into a multi-model approach that could profit on momentum during all different types of market movement. And so I bought 16 years of option chain data and started studying the concept of momentum based quantitative analysis. As I spent more and more weeks thinking about it, I identified more aspects of the problem and more ways to solve it. But no matter how I might think to design algorithms to fundamentally achieve a quantitative approach, I knew that my arbitrary weights and variables and values and decisions could not possibly be the best ones.

Why Retard Bot Might Work

So I approached the problem from all angles: every conceivable way to glean reliably useful quantitative information about a stock's movement and combine it all into a single outcome of trade decisions. Every variable, every decision, every model was a fluid variable that machine learning, via the process of Evolution, could randomly mutate toward perfection. And in doing so, I had to fundamentally avoid any method of testing my results that could be based on a bias. For example, just because a strategy back-tests at 40% consistent yearly returns on the past 16 years of market movement doesn't mean it would do so for the next 16 years, since the market could completely end its bull-run and spend the next 16 years falling. Improbable, but for a strategy outcome that can be trusted to perform consistently, we have to assume nothing.
So that's how Retard Bot works. It assumes absolutely nothing about anything that can't be proven as a fundamental, statistical truth. It uses rigorous machine learning to develop fundamental concepts into reliable, fine tuned decision layers that make models which are controlled by a market-environment-aware Genius layer that allocates resources accordingly, and ultimately through a very complex 18 step process of iterative ML produces a top contender through the process of Evolution, avoiding all possible bias. And then it starts over and does it again, and again, continuing for eternity, recording improved models when it discovers them.

The Current Development Phase

Or... That's how it would work, in theory, if my program wasn't severely limited by the inadequate infrastructure I built it with. When I bought 16 years of data, 2TB compressed to its most efficient binary representation, I thought I could use a traditional database like MongoDB to store and load the option chains. It's way too slow. So here's where I've ended up this past week:
It was time to rip off the bandaid and rebuild some performance infrastructure (the database and decision stack) that was seriously holding me back from testing the project properly. Using MongoDB, which has to pack and unpack data up and down the 7 layer OSI model, it took an hour to test one model for one year. I need to test millions of models for 16 years, thousands of times over.
I knew how to do that, so instead of focusing on keeping things stable so I could show you guys some pretty graphs n shit, I broke down the beast and started rebuilding with a pure memory caching approach that will load the options chains thousands of times faster than MongoDB queries. And instead of running one model, one decision layer at a time on the CPU, the new GPU accelerated decision stack design will let me run hundreds of decision layers on millions of models in a handful of milliseconds. Many, many orders of magnitude better performance, and I can finally make the project as powerful as it was supposed to be.
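The post doesn't share the new infrastructure code, but the caching idea it describes boils down to: pay the deserialization cost once, keep the option chains in RAM keyed by trade date, and let every backtest run do plain in-memory lookups instead of database queries. A minimal sketch of that pattern in Python (the on-disk layout, file naming, and ChainCache name are all invented for illustration; this is not the actual Retard Bot code):

import pickle
from pathlib import Path

class ChainCache:
    def __init__(self, data_dir):
        # Load every serialized per-day chain into memory once, up front.
        self._by_date = {}
        for path in sorted(Path(data_dir).glob("*.pkl")):
            with open(path, "rb") as fh:
                self._by_date[path.stem] = pickle.load(fh)  # e.g. "2008-10-15" -> chain rows

    def chain(self, trade_date):
        # Plain dict lookup: no query layer, no serialization, no network round trip.
        return self._by_date[trade_date]

Every model evaluation after the first then touches nothing slower than RAM, which is what makes testing millions of models over 16 years of chains even conceivable.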
I'm confident that with these upgrades, I'll be able to hit the goal of 60% consistent returns per year. I'll work this goddamn problem for a year if I have to. I have, in the process of trying to become an entrepreneur, planned project after project and given up half way through when it got too hard, or a partner quit, or someone else launched something better. I will not give up on this one, if it takes the rest of the year or five more.
But I don't think it'll come to that. Even with the 20% I've already achieved, if I can demonstrate that in live trading, that's already really good, so there's not really any risk of real failure at this point. But I will, regardless, finish developing the vision I have for Retard Bot and Bidrate Renaissance before I'm satisfied.

Tl;Dr

https://preview.redd.it/0plnnpkw5um51.png?width=3840&format=png&auto=webp&s=338edc893f4faadffabb5418772c9b250f488336
submitted by o_ohi to retard_bot [link] [comments]

Beginner's critiques of Rust

Hey all. I've been a Java/C#/Python dev for a number of years. I noticed Rust topping the StackOverflow most loved language list earlier this year, and I've been hearing good things about Rust's memory model and "free" concurrency for awhile. When it recently came time to rewrite one of my projects as a small webservice, it seemed like the perfect time to learn Rust.
I've been at this for about a month and so far I'm not understanding the love at all. I haven't spent this much time fighting a language in awhile. I'll keep the frustration to myself, but I do have a number of critiques I wouldn't mind discussing. Perhaps my perspective as a beginner will be helpful to someone. Hopefully someone else has faced some of the same issues and can explain why the language is still worthwhile.
Fwiw - I'm going to make a lot of comparisons to the languages I'm comfortable with. I'm not attempting to make a value comparison of the languages themselves, but simply comparing workflows I like with workflows I find frustrating or counterintuitive.
Docs
When I have a question about a language feature in C# or Python, I go look at the official language documentation. Python in particular does a really nice job of breaking down what a class is designed to do and how to do it. Rust's standard docs are little more than Javadocs with extremely minimal examples. There are more examples in the Rust Book, but these too are super simplified. Anything more significant requires research on third-party sites like StackOverflow, and Rust is too new to have a lot of content there yet.
It took me a week and a half of fighting the borrow checker to realize that HashMap.get_mut() was not the correct way to get and modify a map entry whose value was a non-primitive object. Nothing in the official docs suggested this, and I was actually on the verge of quitting the language over this until someone linked Tour of Rust, which did have a useful map example, in a Reddit comment. (If any other poor soul stumbles across this - you need HashMap.entry().or_insert(), and you modify the resulting entry in place using *my_entry.value = whatever. The borrow checker doesn't allow getting the entry, modifying it, and putting it back in the map.)
Pit of Success/Failure
C# has the concept of a pit of success: the most natural thing to do should be the correct thing to do. It should be easy to succeed and hard to fail.
Rust takes the opposite approach: every natural thing to do is a landmine. Option.unwrap() can and will terminate my program. String.len() sets me up for a crash when I try to do character processing because what I actually want is String.chars().count(). HashMap.get_mut() is only viable if I know ahead of time that the entry I want is already in the map, because HashMap.get_mut().unwrap_or() is a snake pit and simply calling get_mut() is apparently enough for the borrow checker to think the map is mutated, so reinserting the map entry afterward causes a borrow error. If-else statements aren't idiomatic. Neither is return.
Language philosophy
Python has the saying "we're all adults here." Nothing is truly private and devs are expected to be competent enough to know what they should and shouldn't modify. It's possible to monkey patch (overwrite) pretty much anything, including standard functions. The sky's the limit.
C# has visibility modifiers and the concept of sealing classes to prevent further extension or modification. You can get away with a lot of stuff using inheritance or even extension methods to tack on functionality to existing classes, but if the original dev wanted something to be private, it's (almost) guaranteed to be. (Reflection is still a thing, it's just understood to be dangerous territory a la Python's monkey patching.) This is pretty much "we're all professionals here"; I'm trusted to do my job but I'm not trusted with the keys to the nukes.
Rust doesn't let me so much as reference a variable twice in the same method. This is the functional equivalent of being put in a straitjacket because I can't be trusted to not hurt myself. It also means I can't do anything.
The borrow checker
This thing is legendary. I don't understand how it's smart enough to theoretically track data usage across threads, yet dumb enough to complain about variables which are only modified inside a single method. Worse still, it likes to complain about variables which aren't even modified.
Here's a fun example. I do the same assignment twice (in a real-world context, there are operations that don't matter in between.) This is apparently illegal unless Rust can move the value on the right-hand side of the assignment, even though the second assignment is technically a no-op.
//let Demo be any struct that doesn't implement Copy.
let mut demo_object: Option<Demo> = None;
let demo_object_2: Demo = Demo::new(1, 2, 3);
demo_object = Some(demo_object_2);
demo_object = Some(demo_object_2);
Querying an Option's inner value via .unwrap and querying it again via .is_none is also illegal, because .unwrap seems to move the value even if no mutations take place and the variable is immutable:
let demo_collection: Vec<Demo> = Vec::<Demo>::new();
let demo_object: Option<Demo> = None;
for collection_item in demo_collection {
    if demo_object.is_none() {
    }
    if collection_item.value1 > demo_object.unwrap().value1 {
    }
}
And of course, the HashMap example I mentioned earlier, in which calling get_mut apparently counts as mutating the map, regardless of whether the map contains the key being queried or not:
let mut demo_collection: HashMap<i32, Demo> = HashMap::<i32, Demo>::new();
demo_collection.insert(1, Demo::new(1, 2, 3));
let mut demo_entry = demo_collection.get_mut(&57);
let mut demo_value: &mut Demo;
//we can't call .get_mut().unwrap_or(), because we can't construct the default
//value in-place. We'd have to return a reference to the newly constructed
//default value, which would become invalid immediately. Instead we get to
//do things the long way.
let mut default_value: Demo = Demo::new(2, 4, 6);
if demo_entry.is_some() {
    demo_value = demo_entry.unwrap();
} else {
    demo_value = &mut default_value;
}
demo_collection.insert(1, *demo_value);
None of this code is especially remarkable or dangerous, but the borrow checker seems absolutely determined to save me from myself. In a lot of cases, I end up writing code which is a lot more verbose than the equivalent Python or C# just trying to work around the borrow checker.
This is rather tongue-in-cheek, because I understand the borrow checker is integral to what makes Rust tick, but I think I'd enjoy this language a lot more without it.
Exceptions
I can't emphasize this one enough, because it's terrifying. The language flat up encourages terminating the program in the event of some unexpected error happening, forcing me to predict every possible execution path ahead of time. There is no forgiveness in the form of try-catch. The best I get is Option or Result, and nobody is required to use them. This puts me at the mercy of every single crate developer for every single crate I'm forced to use. If even one of them decides a specific input should cause a panic, I have to sit and watch my program crash.
Something like this came up in a Python program I was working on a few days ago - a web-facing third-party library didn't handle a web-related exception and it bubbled up to my program. I just added another except clause to the try-except I already had wrapped around that library call and that took care of the issue. In Rust, I'd have to find a whole new crate because I have no ability to stop this one from crashing everything around it.
Pushing stuff outside the standard library
Rust deliberately maintains a small standard library. The devs are concerned about the commitment of adding things that "must remain as-is until the end of time."
This basically forces me into a world where I have to get 50 billion crates with different design philosophies and different ways of doing things to play nicely with each other. It forces me into a world where any one of those crates can and will be abandoned at a moment's notice; I'll probably have to find replacements for everything every few years. And it puts me at the mercy of whoever developed those crates, who has the language's blessing to terminate my program if they feel like it.
Making more stuff standard would guarantee a consistent design philosophy, provide stronger assurance that things won't panic every three lines, and mean that yes, I can use that language feature as long as the language itself is around (assuming said feature doesn't get deprecated, but even then I'd have enough notice to find something else.)
Testing is painful
Tests are decidedly second-class citizens in Rust. Unit tests are expected to sit in the same file as the production code they're testing. What?
There's no way to tag tests to run groups of tests later; tests can be run singly, using a wildcard match on the test function name, or can be ignored entirely using #[ignore]. That's it.
Language style
This one's subjective. I expect to take some flak for this and that's okay.
submitted by crab1122334 to rust [link] [comments]

Since 1983, I have lived, worked and raised a family in a progressive, egalitarian, income-sharing intentional community (or commune) of 100 people in rural Virginia. AMA.

Hello Reddit!
My name is Keenan Dakota, and I have lived at Twin Oaks, an income-sharing, intentional community in rural Virginia, for 36 years, since 1983. I grew up in northern Virginia; my parents worked in government. I went to George Mason University where I studied business management. I joined Twin Oaks when I was 23 because I lost faith in the underpinnings of capitalism and was looking for a better model. I have stayed because over time capitalism hasn't looked any better, and it's a great place to raise children. While at Twin Oaks, I raised two boys to adulthood, constructed several buildings, managed the building maintenance program, and have managed some of the business lines at different times.
Proof this is me. A younger photo of me at Twin Oaks. Here is a video interview of me about living at Twin Oaks. Photo of Twin Oaks members at the 50th anniversary.
Some things that make life here different from the mainstream:
More about Twin Oaks:
Twin Oaks is an intentional community in rural central Virginia, made up of around 90 adult members and 15 children. Since the community's beginning in 1967, our way of life has reflected our values of cooperation, sharing, nonviolence, equality, and ecology.
We do not have a group religion; our beliefs are diverse. We do not have a central leader; we govern ourselves by a form of democracy with responsibility shared among various managers, planners, and committees. We are self-supporting economically, and partly self-sufficient. We are income-sharing. Each member works 42 hours a week in the community's business and domestic areas. Each member receives housing, food, healthcare, and personal spending money from the community.
We have open-slots and are accepting applications for new members. All prospective new members must participate in a three-week visitor program. Applicants to join must leave for 30 days after their visit while the community decides on their application.
We offer a $5 tour on Saturdays of the property, starting in March. More info here.
Ask me anything!
TL;DR: Opted out of the rat-race and retired at 23 to live in the woods with a bunch of hippies.
EDIT: Thanks for all the questions! If you want some photos of the farm, you can check out our instagram.
EDIT2: I'm answering new, original questions again today. Sort by new and scroll through the trolls to see more of my responses.
EDIT3: We DO have food with onion & garlic! At meals, there is the regular food, PLUS alternative options for vegan/vegetarian/no gluten/no onions & garlic.
EDIT4: Some of you have been asking if we are a cult. No, we are not. We don't have a central leader or common religion. Here are characteristics of cults, FYI.
Edit: Yikes! Did I mention that I am 60? Reddit is not my native land. I don't understand the hostile, angry and seemingly deliberately obtuse comments on here. And Soooo many people!
Anyway, to the angry crowd: Twin Oaks poses no threat to anyone, we are 100 people out of a country of 330 million? Twin Oaks reached its current maximum population about 25 years ago, so not growing fast, or at all. Members come and go from Twin Oaks. There are, my guess is, 800 ex-members of Twin Oaks, so we aren't holding on to everyone who joins—certainly, no one is held against their will.
Twin Oaks is in rural Virginia, but we really aren't insular, isolated, gated or scared of the mainstream culture. We have scheduled tours of the whole property. Local government officials, like building inspectors, come to Twin Oaks with some frequency. People at Twin Oaks like to travel and manage to do so. I personally, know lots of people in the area, I am also a runner, so I leave the property probably every day. There are lots of news stories about Twin Oaks over the years. If you are worried about Twin Oaks, maybe you could go read what the mainstream (and alternative) media have to say.
Except about equality Twin Oaks is not particularly dogmatic about anything. (I know some people at Twin Oaks will disagree with that statement.) Twin Oaks isn't really hypocritical about Capitalism, Socialism, or Communism, we just don't identify those concepts as something that we are trying to do. Twin Oaks is not trying to DO Communism, we are trying to live a good life with equally empowered citizens—which has led us to try to maintain economic parity among members. Communists also do that. In making decisions in the community I don't remember anyone trying to support or oppose an idea due to excess or insufficient Communism, Socialism, or Capitalism. In most practical senses those words aren't useful and don't mean anything. So, no need to hammer Twin Oaks for being insufficiently pure, or hypocritical.
Twin Oaks is very similar to the Kibbutz in Israel. If anyone has concerns or questions about what would happen if places like Twin Oaks suddenly became much larger and more common, read about the history of the Kibbutz, which may have grown to possibly 1% of the population at their largest? There was and is no fight with Capitalism from the kibbutz—or with the State. My point is—not a threat.
To the other people who think that the ideas of Twin Oaks are interesting, I want you to know it is possible to live at Twin Oaks (or places like Twin Oaks) and happily live one's entire life. There is no central, critical failing that makes the idea not work. And plenty of upside. But do lots of research first. Twin Oaks maintains a massive web site. (Anyway, it takes a long time to read.)
But what I would like to see is more people starting more egalitarian, income-sharing communities. I think that there is a need for a community that is designed and built by families, and who also share income, and provide mutual support with labor and money. If you love this concept, maybe consider gathering together other people and starting your own.
Ideologically speaking:
-Ecology: the best response to ecological problems is for humans to use fewer resources. The easiest way to use fewer resources is to share resources. Living communally vastly cuts down on resource use without reducing quality of life.
-Equality: ideologically speaking, most people accept the idea that all humans have equal rights, but most social structures operate in ways that are fundamentally unequal. If we truly believe in equality then we ought to be willing to put our bodies where our ideology is. In a truly equal world, the issues of sexism and racism and all other forms of discrimination would, essentially, not exist.
-Democracy: Twin Oaks uses all manner of decision-making models and tools to try to include everyone and to keep people equally empowered. There is no useful word for this. We do use a majority vote sometimes, as a fallback. But sometimes we use consensus. We sometimes use sociocracy (dynamic governance). The word "Isocracy" (decision-making among equals), would be useful to describe Twin Oaks' decision-making model, but Lev in Australia has written an incomprehensible "definition" on Wikipedia, that he keeps changing back when someone corrects it.
-Happiness: The overarching goal of all ideologies is to make people happy, right? I mean, isn't it? Capitalism is based upon the belief that motivation is crucial to human aspiration and success (and therefore more happiness). Under Capitalism, equality is a detriment because it hinders motivation (less fear of failure, or striving for success). Twin Oaks believes that humans are happier when they are equal, and equally empowered. So the place to start up the ladder of happiness is to first make everyone equal. Well, Twin Oaks is mainly still working on that first step.
EDIT5: Some have asked about videos - here are links to documentaries about Twin Oaks by BBC, VICE and RT.
submitted by keenan_twinoaks to IAmA [link] [comments]

A very long, very in-depth attempt at analyzing Teemo

Warning, this is extremely long. Like 12 pages on a google doc long. You have been warned.

So there has been a lot of discussion about Teemo recently, from what his iconic skills are (all of them), to what items he can build (all of them), to what position he can be played in (... all of them), and it's kinda gone nowhere fast, as Teemo, and by extension his player base, is just too flexible to be defined in any of these ways.
So what actually makes Teemo unique?
His playstyle.
Teemo is an old style of champion. I'm not talking about his art, or his kit (though both of these are technically also true), I'm talking about how Teemo's goal isn't to all-in and combo his opponent down on their first mistake and snowball from there, but rather to create a lead from dozens of small victories. Your goal isn't necessarily to kill your opponent (though that's always good) but to force them back, causing them to miss cs and xp repeatedly, or waste their time smashing blindly into a bush. And later in the game, while, again, killing people is now the goal, forcing them to have to back after tripping a few shrooms, or leading them on a fruitless chase through the jungle after splitpushing, are just as useful to Teemo. If I had to describe Teemo's playstyle, it would be
Attritional, Rapid Force, Psychologically Manipulative, War Mastermind of Breakdown
I'm only half joking, as even though this is from a Spongebob theory video on Plankton, it actually describes Teemo to some degree.
As The Theorizer (the guy who made the video I'm referencing) put it:
Attritional, someone who engages in attrition warfare. Rapid force, very fast, very hard attacks. Psychologically manipulative, basically, very good at trickery and getting people to do what you want. War mastermind, well duh, someone who is good with war. Breakdown, to break something down.
Only instead of getting a burger recipe, we are getting our enemies to tilt.
But with every new release Teemo has gotten more and more outclassed, as his opponents get more and more mobility that a small 10-52% movespeed boost can’t escape from. We all recognize that Teemo needs a rework, a Morgana/Ezreal level rework that modernizes Teemo’s kit without changing its functionality that much, but a rework nonetheless.
Last year u/RiotJag attempted to do a mini rework on Teemo, starting with this:
Base Mana Regen increased from 1.92 to 2.5
Mana Regen per level increased from 0.09 to 0.15
Mana/lvl up increased from 20 to 25
Toxic Shot (Passive)
Teemo’s basic attacks now deal 10-50 bonus magic damage and leave a Poison DoT that deals 24-192 magic damage over 4 seconds.
Toxic Shot damage (both the on-hit and the DoT) is amped by 50% whenever there are other Poison debuffs on the target
Blinding Dart (Q)
Base damage lowered from 80/125/170/215/260 to 80/115/150/185/220
AP Ratio lowered from 0.8 to 0.6
Now is a Poison Debuff
Move Quick (W)
No longer breaks stealth
Guerilla Warfare (E)
[New Active] After a 1 second delay, Teemo enters Camouflage for 3-5 seconds. Teemo is slowed by 25/22.5/20/17.5/15% during this effect, and gains 20/30/40/50/60% Attack Speed for 3 seconds when it ends. Camouflage does not tick down while Teemo is in a brush or is standing still.
Noxious Trap (R)
Base Damage lowered from 200/325/450 to 150/250/350
AP ratio lowered from 0.5 to 0.4
Mushrooms health increased from from [6 at all ranks] to [6/8/10]
Mushroom max ammo count up from [3 at all ranks] to [3/4/5]
And then after a few iterations it ended up like this
Base Mana Regen increased from 1.92 to 2.5
Mana Regen per level increased from 0.09 to 0.15
Mana/lvl up increased from 20 to 25
Base damage lowered from 54 to 51
Attack speed per level lowered from 3.38 to 2
Toxic Shot (Passive)
Teemo’s basic attacks now deal 8-50 bonus magic damage and leave a Poison DoT that deals 24-180 magic damage over 4 seconds.
Toxic Shot damage (both the on-hit and the DoT) is amped by 50% whenever there are other Poison debuffs on the target
Blinding Dart (Q)
Base damage lowered from 80/125/170/215/260 to 70/105/140/175/210
mana cost increased from 70/75/80/85/90 to 80/85/90/95/100
AP Ratio lowered from 0.8 to 0.6
Now is a Poison Debuff
Move Quick (W)
No longer breaks stealth
Guerilla Warfare (E)
Cooldown: 40/37/34/31/28
[New] "After a 1 second delay, Teemo becomes Invisible indefinitely if standing still or in brush, and can move up to 7/7.5/8/8.5/9 Teemos while out of brush, but any non-periodic damage from champions will break him out. Teemo can see 25% farther while stealthed. Upon breaking Guerilla Warfare, Teemo gains 20/30/40/50/60% Attack Speed for 3 seconds. While on cooldown, standing in brush will tick down guerilla Warfare's cooldown faster."
Stealth duration while moving: 2/2.25/2.5/2.75/3
>Noxious Trap (R)
Base Damage lowered from 200/325/450 to 150/250/350
AP ratio lowered from 0.5 to 0.4
Mushroom max ammo count up from [3 at all ranks] to [3/4/5]
Traps now become invisible after 1 second
Traps can continue to bounce on other traps
Additionally there were these prospective changes that were scrapped due to the community’s disinterest in the rework direction.
"Most recent version in testing was pretty E focused as follows (differences all versus previous prototype version, not versus live Teemo):
- No longer granted extra sight range
- CD didn't tick down faster in brush
- Distance while invisible up a bit
- CD lower
- Standing in brush slowly replenished distance Teemo could move while invisible
Haven't heard how playtesting with that went though. Expect this will likely continue as a slow burn project rather than something that gets released or killed quickly, especially given it's the secondary priority of the designer working on it."
After it was scrapped we got the quality of life buffs that we have now.
But lets discuss the rework.
I honestly think that the _concept_ is the best shot at reworking Teemo. The numbers and exact implementation are debatable, but switching e and passive is a great idea as, after the shrooms, Teemo's on-hit poison is his most iconic ability. Not to mention it freed Teemo up to be able to max his abilities depending on what he needed for his matchup, rather than e max always, and then either q or w depending on choice.
The things people didn’t like about it though were:
The shrooms being nerfed damage wise.
I understand this one, Doomshroom Teemo is my favorite build, but his shrooms are problematic in their current state as they take up a large amount of Teemo's power budget, but also can amount to nothing as the enemy gets 5 sweepers and clears all of them. Not to mention how they synergize so well with Liandry's that it's a core item for Teemo, despite the fact that his q, the only other ability that can proc it, does not utilize it all that well due to being a medium cooldown, single target spell that can only proc it once. And this is going to be a problem, as in the item update Liandry's will be a mythic item, and Teemo builds Liandry's [77.5% of the time!](https://www.leagueofgraphs.com/champions/items/teemo) To put this into perspective, Nashor's Tooth is only bought 61.5% of the time. An item that gives Teemo every stat he wants regardless of build (ignoring Tankmo) and reinforces Teemo's main damage outlet (basically increases e's on-hit damage by 150-30% depending on e rank, and the on-hit scaling from .3 to .45 AP), is bought 16.5% less often than an item that only synergizes with one skill. If this continues after the update, which it likely will as Liandry's increases Shroom damage by 20-200% (depending on when it's bought, the target hit, and other items bought. The 200% would be lvl 1 ult, no other AP, and a 1000 health target with 50 MR) and it's being _buffed_ via the mythic item stat bonus, that will be above the threshold that will cause Teemo to be changed due to [hardbinding](https://na.leagueoflegends.com/en-us/news/dev/dev-updated-approach-to-item-balancing/). Unless Zyra and Brand also buy it at the same rates after the update (as that would trigger the "nerf item" option), but unlike Teemo the item actually synergizes with their entire kit (plants and blaze, which utilize every ability) rather than just one ability.
It didn’t improve the w
This was a major hole in the rework, as while switching e and p is great, it wouldn't be awful if they stayed the same. Meanwhile Teemo's w is supposed to put the "swift" in his title of Swift Scout, and it does so… barely. Exactly what should happen with it is, as always, up for debate, but it needs changing to be on par with current LoL: the current w is supposed to help Teemo kite, yet even if you dodge everything thrown at you it can get disabled by a Luden's proc hitting an ally.
The camo stealth makes him a worse Twitch.
This is half true. The first one, yes absolutely. But it didn't stay as a camouflage ability. Sure, both are marksmen that can go invisible, but Twitch's is for long distances and to keep him safe while he positions to obliterate the enemy team from 800 range, while Teemo's is restricted range wise, and is more useful to escape enemies' notice or wait for them to come to you rather than you going to them. At that point, they only share the fact that they both come out of stealth to surprise people (and a DOT, if 30 damage after 6 attacks at lvl 18 deserves to be called one), which every camouflage user does.
And my own complaint about both proposed rework and live Teemo:
His kit has limited synergy.
Each part of Teemo's kit doesn't help the other parts very much. Over the years Teemo players have worked the separate parts of his kit into a cohesive playstyle, but each part of his kit just does its own thing. For instance, in theory his blind should support keeping his w up, but in reality every champ, even Udyr, has a non auto attack way of hitting Teemo (Udyr typically has smite and he can always rush up to you with Bear and activate Phoenix stance), and many have attacks that are undodgable (unless you count not being in range to be targeted as dodging it, which for many champs is just being outside Teemo's attack range).
So, where should we go from here?

Well, we should discuss the purpose of his abilities.

If every part of his kit is iconic, and replacing them completely would change Teemo in a way that the playerbase wouldn’t like, then we should decide what role each of his abilities plays in his playstyle, and how they could be changed to better fit them.

Toxic Shot
This is the base of Teemo’s kit. Doing DOT damage after autoing someone is key to Teemo's attritional playstyle in lane, and hit and run/kiting playstyle later in the game. It has a decent base damage and a great scaling, and is just as useful for pure AP builds as on hit ones.
It does exactly what it needs to, nothing about it needs to be changed.
Blinding Dart
Teemo's reactive defense in a fight. On one hand, it's extremely powerful as it can completely shut down the main damage of auto reliant champions (Yi, Udyr, some ADCs) for up to 2.5 seconds, provided they don't have on hit effects that ignore the blind and hit anyway. On the other hand, it's completely useless against every other type of champ (mages, assassins, tanks, spellslinger ADCs, most Juggernauts). It's not healthy for Teemo's main defensive tool to be useless (or of limited use, as I counted champs like Nunu who auto attack, but don't rely on it to do their job, as part of it being useless) against 70% of the champion roster.
As people have talked about, Teemo _needs_ this in order to stay safe, yet in most of his matchups (regardless of role) it can't do anything to protect him, let alone later in the game when he has to face the other 4 enemy champs. And that's not counting the fact that ranged autos that are in transit before the blind hits are not blocked, which means that even against a Vayne with Condemn on cooldown, it still can't keep Teemo's w up, as even if you blind her at the first chance she will probably have launched an auto already.
Move Quick
If blinding dart is a reactive defense, then Move Quick is supposed to be Teemo's proactive defense. When he was added, it allowed him to more easily kite slower enemy champions as there were fewer speed boosts and dashes, and in general lower mobility. Nowadays, the passive is deactivated rather quickly in a fight, and 3 seconds of MS isn't enough to give him a fighting chance of escaping/keeping up.
The intention of the skill is to allow Teemo greater kiting potential in fights, while giving enemies a way to shut it down to have a chance of catching Teemo. But the reality is that unless you are against a champ like Garen who has zero ranged attacks, you are not going to be able to keep it active, making it feel like an out of combat passive.
The issue is keeping the balance between Teemo having kiting power in a fight, and allowing enemies a chance to slow down Teemo, because whether we like it or not, Riot does want enemies to be able to catch kiting champs like Teemo, Kalista, and Ashe, because they would be horrible to play against otherwise. Right now, the balance is heavily skewed towards enemies, as any kind of damage will reduce Teemo's ability to kite enemies with matching boots to a singular 3 second burst for the rest of the fight.

Guerilla warfare
The most underutilized part of Teemo's kit. In its current form it's incredibly strong, yet extremely situational.
It's a decent strategic option for positioning mid game, as it can allow you to dodge an enemy (provided they haven't seen you yet), or allow you to ambush people, but to use it offensively requires an enemy to come to you as you can only move inside bushes, and defensively it's limited to:
Dodging people that have not seen you
Becoming invincible to enemies that only have point and click damage(Vayne, Yi,)
Stalling for time so an ally can come save you
For such an integral part of Teemo's kit… it feels a little tacked on. It's a situational fight opener or utility tool, and any attempt to use it while near enemy champs usually ends up with them knowing where you are and throwing skillshot after skillshot at you. It's not _bad,_ but to say it can't be improved would be a lie.
Personally I like the sound of that last version, as it would differentiate between Teemo’s e and Twitch’s q, while fitting into Teemo’s playstyle nicely(or, at least, I already dash from bush to bush while in the enemy jungle to check if the coast is clear. Not sure about the rest of you). The fact that you would have to “charge up” movement time by staying in bushes helps give the feel that you are creeping around, without actually slowing Teemo down like the first version of the change.
Noxious Trap
Shrooms have 3 main uses:
Damage, be it wearing down enemies as they attempt to move throughout the map, making it so they always enter fights below full health or killing low health fleeing enemies
Granting vision of important areas, such as Dragon, Baron, and the enemy jungler's camps(or your jungle camps, if your jungler is being invaded)
CC, cutting off engage or escape paths, slowing enemies so that Teemo/his team have a chance to escape/engage.
Right now they do all of these well, except damage vs tanks, and don't have to be changed. But like I pointed out before, Liandry's is a massive amount of their power, and that isn't healthy, as unless you are planning on ignoring the shrooms' damage completely you have to buy it, and mythic item, hardbinding, yada yada yada....
So if the Shrooms are changed they should aim to keep the same power, but be less oppressive against squishies and more effective (or, rather, less ineffective) against tanky targets, without having to rely on Liandry's to the point where it's a core item even against full squishy teams.

Suggestions on what could change

Toxic Shot
The only thing I would change about it is moving its ticks from 1/s to 4/s, like Singed's poison or Karthus's AoE, or 2/s like Cassiopeia's poison. It would give better readability on the damage for everyone involved, which, while it is a slight nerf to Teemo, clarity changes that increase counterplay allow for more power to be added elsewhere.
Also, unlike Singed's poison or Karthus's e, Teemo doesn't have a "turn on for one tick to farm and turn off" mechanic for his poison like they do, so other than age I can't think of why it is one second between ticks. The 'surprise' factor of how much damage it does is good for Teemo, but how much damage the enemy is taking shouldn't be something that is obscured.
Blinding Dart
The simplest change would be to make it apply nearsighted, which would have 3 effects.
  1. Make it less of a hard counter to melee auto-centric champs, while still allowing it utility
  2. Improve its usability against all kinds of champs, and open up more uses than "damage" and "no auto attack for you"
  3. Allow Teemo to actually participate in guerrilla warfare, making it possible for him to pop up, attack, and disappear on an enemy champ, provided he is outside their truncated vision range.
I have other ideas as to what can be done with this ability, but it makes more sense in context so that will be below.
Move Quick
I have two ideas for how this could be changed to be better:
  1. It does not break on damage from poisoned enemies, as well as increasing it to 10-30% because round numbers.
This one is kinda obvious of how it helps Teemo, but I like it because it allows Teemo to keep his speed if he gets the drop on enemies, but if they get in the first attack then they are rewarded with a Teemo that is easier to catch.
  2. Teemo's base MS down to 325 (WAIT, don't crucify me just yet), decrease the % MS boost from 10-26% to 10-19%, and then add on +5-25 base MS per rank.
Now that sounds broken, as an increase of 5 base ms is a huge increase in winrate usually, but hear me out.
Rank 1: Teemo would have 325+5+10%= 330+10%=363 which is exactly the same as live.
Rank 2: Teemo would have 325+10+12.25%= 335+12.25%=376 the same as live.
Rank 3: Teemo would have 325+15+14.5%= 340+14.5%=389 the same as live.
Rank 4: Teemo would have 325+20+16.75%= 345+16.75%=403 the same as live.
Rank 5: Teemo would have 325+25+19%= 350+19%=416 the same as live.
Though a note is that with boots, ranks 1-4 actually give less MS than live. It's only a 1-5 MS difference, which may not even end up showing up after the movement speed soft caps apply. Except for Mobility Boots, there is a significant difference there after the MS cap, but Teemo only builds them .03% of the time or so, and they are deactivated whenever someone trips a shroom, so that's a sacrifice I'm willing to make.
The only noticeable thing that would change is how fast Teemo is when the passive is down, and how fast he is when using the w active (if I'm doing the math right it is slower by a max of 18 MS, with Mobility Boots, but that makes sense as we are dropping 14% bonus MS on the w active in exchange for 25 flat MS, which means less of a boost with just it, but it scales better with other % movement boosts).
[Here is my compiled list of Teemo's movement speed with every kind of boot + w](https://docs.google.com/document/d/1mkKCcFzXV8PbXseYadoi6z1rs9SN0xGRCFMjw4z3Ito/edit?usp=sharing), and [here is a graph that allows you to easily put in the variables if you want to check it out yourself](https://www.desmos.com/calculator/irneett3wh).
Guerrilla Warfare
In addition to the prospective changes that we never saw, which to me sounds like the best version(assuming the numbers are not terrible), here are a few ideas I thought of:
Teemo's e breaks on damage outside bushes, but while inside a bush Teemo is obscured (you know, that broken "true stealth" thing Akali had, only there are no bushes inside tower range and Teemo doesn't have 3 dashes so it should be less obnoxious) and the invisibility doesn't break. What this basically means is that if an enemy hits a Scryer's Bloom or an ability that grants true sight on Teemo while he is inside a bush, he still can't be clicked on.
Standing still for 1 second increases shroom vision range over the next two seconds. I like this one, as it enhances the scouting aspect of his theme, but enemies can interact with it and it has to be something Teemo is actively doing, rather than just passive extra vision.
Noxious Trap
Assuming we are touching these, I honestly think having it apply % current health(better against tanks, not as oppressive against squishies), along with the passive poison would allow it to function both as a weakening chip damage, and potential low health killer, without getting into the 2 shot shroom territory as that feels bad to be on the end of. They should be able to kill if you run into a ton of them in a row, but not automatic death after hitting one from a fed Teemo.
I honestly haven’t thought of a better way other than that to keep the balance between chip damage and kill threat without entering the binary “die by 2 shrooms or 5 Sweepers” territory
And now for my Rework suggestion(do note that while I have considered the numbers I gave things, numbers are easily changed about and as such laying out the mechanics is my goal):

Toxic Shot
Teemo laces his attacks with poison from his Kumongu shrooms, causing his basic attacks to deal [10-50 + .3 AP] damage, and his basic attacks and spells to poison the target for [1.5-11.25 +.025 AP] every .25 seconds for 4 seconds.
Poisons from unique attacks(autos would be one unique attack, q is one, and shrooms count as one) stack up to three times, each new stack at 50% extra damage(so 175%, which max damage without refreshing the poison would be 315 + .7AP total at level 18, for landing a shroom, an auto, and a q).
Similar to Sol's passive, I'm suggesting Teemo's passive be the main source of his damage and tie his entire kit together. Maybe 175% is too low for 3 stacks, but I thought 200% might be too overbearing. Anyway, it's not like numbers aren't constantly being changed.

Sporecloud Dart
Skillshot, 700 range, AOE detonation, 300 range, 80/85/90/95/100 mana cost
Does 50/75/100/125/150 +.4AP + 1bAD(bonus AD) damage to main target, applying on hit effects(not passive’s on hit), reduces vision range for them for 1.5/1.6/1.7/1.8/1.9/2 seconds, spreads passive DOT(not on hit) to target and nearby enemies
This one has many reasons:
By changing it to a skill shot, from a targeted skill, it allows enemies to do more than just “don’t get near Teemo” to avoid it, but in return the cc is better against a wider assortment of enemies rather than just auto reliant ones. It also means that if an enemy can get on you, they have a chance of actually hitting you, but it also keeps his ability to shut down enemy ADC’s intact, if a little less duration.
As it is a skill shot, I gave it slightly increased range so that Teemo has something to do in teamfights, but also increased its mana cost so it can’t be spammed
The reason it applies the passive in AOE is because I was inspired by Teemo’s skill in this TFT set, which is where the name comes from, and it addresses his issue in the jungle of having poor multi-target damage pre-6. This gives him a pre-6 option for clearing camps/pushing waves, and makes up in part for the damage that is lost from the shrooms(keep reading for that, its not as bad as you might think)
Move quick
10/15/20/25/30% ms
Does not break on damage from poisoned targets
That other option I outlined above would also work, I just thought of this one first and it fits with Teemo spreading poison everywhere.
Guerrilla Warfare
1.5 second arm time, indefinite while still/in bushes. Can move 2/2.25/2.5/2.75/3 seconds while stealthed, recharges slowly in bushes.
Element of Surprise: 20/30/40/50/60 AS for 3 seconds
Optional bonus:>! After standing still for 1 second, shroom's vision radius grows by 10/20/30/40/50% over the next 2 seconds.!<
Basically the prospective changes that we never saw. Not sure what the cooldown was going to be though.
The optional bonus is a different take on the "Teemo gains 25% sight range while stealthed" from the most recent version. I'm not sure if it would be overpowered or not, but I thought why not? It's not like they haven't removed mechanics before. Anyway, the idea would be that Teemo can set up a vision network, which enemies are already looking to clear because shrooms, but at the cost of doing things. Great for ambushes, not so much for watching for a gank.
Noxious Trap
Deal 10/15/20% max health, and applies Toxic Shot’s DOT(not the on hit)
Enemies effected by shrooms take .75% extra damage from Toxic Shot for every 1% missing health, capping at 50% damage(so 109/lvl 6 - 270/lvl 18 +.6 ap total damage, when they are at 33% health for just the shroom, and 472 +1.05 AP for 3 stacks)
Assuming the combo is just one auto, q, and shroom, that is a max of 100% tAD + 100% bAD + 50 + 150 + 472 + 10% current health + 175% AP (so 672 + 175% AP + auto damage) when the enemy is at ⅓ health. That sounds like a lot, but it's not that much for just one combo. For reference, Veigar can do 650 + 150% AP with just his ult alone, and then has another 540 + 160% AP from his other abilities.
Anyway, the % current health would allow Teemo to affect Tanks with his Shrooms, without making it overbearing for squishies, while the pseudo-execute extra passive damage makes it so that Teemo can still kill with Shrooms, be it in fights or on fleeing enemies. The idea is to make 2-shot shrooms less feasible, but allow the damage to scale better. It also would only apply Liandry's once, which, while a nerf, is one that ultimately benefits Teemo, as Liandry's would be an effective option against 2-3 tank teams, but not a mandatory item for every game you don't go with On-Hit Teemo.
submitted by TheLastBallad to TeemoTalk [link] [comments]

NASPi: a Raspberry Pi Server

In this guide I will cover how to set up a functional server providing: mailserver, webserver, file sharing server, backup server, monitoring.
For this project a dynamic domain name is also needed. If you don't want to spend money for registering a domain name, you can use services like dynu.com, or duckdns.org. Between the two, I prefer dynu.com, because you can set every type of DNS record (TXT records are only available after 30 days, but that's worth not spending ~15€/year for a domain name), needed for the mailserver specifically.
Also, I highly suggest you take a look at the documentation of the software used, since I cannot cover every feature.

Hardware


Software

(minor utilities not included)

Guide

First things first, we need to flash the OS to the SD card. The Raspberry Pi imager utility is very useful and simple to use, and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in the beta stage, so I am going to cover the 32-bit version (but with a 64-bit kernel, we'll get to that later).
Before moving on and powering on the Raspberry Pi, add a file named ssh in the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi.
Once powered on, we need to attach it to the LAN, via an Ethernet cable. Once done, find the IP address of your Raspberry Pi within your LAN. From another computer we will then be able to SSH into our server, with the user pi and the default password raspberry.
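If you're not sure which address the Pi was given, a quick scan of the subnet from another machine (or a look at your router's DHCP lease table) will find it. The subnet and the address below are just examples, adjust them to your own network:
$ sudo apt install nmap
$ nmap -sn 192.168.0.0/24
$ ssh pi@192.168.0.5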

raspi-config

Using this utility, we will set a few things. First of all, set a new password for the pi user, using the first entry. Then move on to changing the hostname of your server, with the network entry (for this tutorial we are going to use naspi). Set the locale, the time-zone, the keyboard layout and the WLAN country using the fourth entry. At last, enable SSH by default with the fifth entry.
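For reference, the utility itself is started from the SSH session as shown below. On recent Raspberry Pi OS images there is also a non-interactive mode; the hostname change is shown here only as an example of that optional shortcut, the menu does the same job:
$ sudo raspi-config
$ sudo raspi-config nonint do_hostname naspi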

64-bit kernel

As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config.
$ sudo rpi-update
$ sudo nano /boot/config.txt
arm_64bit=1 
$ sudo reboot

swap size

With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damages caused by the OOM killer.
$ sudo dphys-swapfile swapoff
$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024 
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Here we are increasing the swap size to 1 GB. According to your setup you can tweak this setting to add or remove swap. Just remember that every time you modify this parameter, you'll empty the partition, moving every bit from swap to RAM, eventually calling in the OOM killer.

APT

In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages.
$ sudo nano /etc/apt/apt.conf.d/01norecommend
APT::Install-Recommends "0"; APT::Install-Suggests "0"; 

Update

Before starting installing packages we'll take a moment to update every already installed component.
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt autoremove
$ sudo apt autoclean
$ sudo reboot

Static IP address

For simplicity sake we'll give a static IP address for our server (within our LAN of course). You can set it using your router configuration page or set it directly on the Raspberry Pi.
$ sudo nano /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.5/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
$ sudo reboot

Emailing

The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers.
First we'll set the hostname to our domain name. Since my domain is naspi.webredirect.org, the domain name will be mail.naspi.webredirect.org.
$ sudo hostnamectl set-hostname mail.naspi.webredirect.org
$ sudo nano /etc/hosts
127.0.0.1 mail.naspi.webredirect.org localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 naspi
Now we can download and setup iRedMail
$ sudo apt install git
$ cd /home/pi/Documents
$ sudo git clone https://github.com/iredmail/iRedMail.git
$ cd /home/pi/Documents/iRedMail
$ sudo chmod +x iRedMail.sh
$ sudo bash iRedMail.sh
Now the script will guide you through the installation process.
When asked for the mail directory location, set /var/vmail.
When asked for webserver, set Nginx.
When asked for DB engine, set MariaDB.
When asked for, set a secure and strong password.
When asked for the domain name, set yours, but without the mail. subdomain.
Again, set a secure and strong password.
In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it in the next step.
When asked for, confirm your choices and let the installer do the rest.
$ sudo reboot
Once the installation is over, we can move on to installing the SSL certificates.
$ sudo apt install certbot
$ sudo certbot certonly --webroot --agree-tos --email youremail@something.com -d mail.naspi.webredirect.org -w /var/www/html/
$ sudo nano /etc/nginx/templates/ssl.tmpl
ssl_certificate /etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem; 
$ sudo service nginx restart
$ sudo nano /etc/postfix/main.cf
smtpd_tls_key_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/cert.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mail.naspi.webredirect.org/chain.pem
$ sudo service postfix restart
$ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
$ sudo service dovecot restart
Now we have to tweak some Nginx settings in order to not interfere with other services.
$ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.naspi.webredirect.org;
    root /var/www/html;
    index index.php index.html;
    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}
server {
    listen 80;
    server_name mail.naspi.webredirect.org;
    return 301 https://$host$request_uri;
}
$ sudo ln -s /etc/nginx/sites-available/90-mail /etc/nginx/sites-enabled/90-mail
$ sudo rm /etc/nginx/sites-*/00-default*
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server_names_hash_bucket_size 64;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
}
$ sudo service nginx restart

.local domain

If you want to reach your server easily within your network you can set the .local domain to it. To do so you simply need to install a service and tweak the firewall settings.
$ sudo apt install avahi-daemon
$ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept
$ sudo service nftables restart
When editing the nftables configuration file, add the above lines just below the other specified ports, within the chain input block. This is needed because avahi communicates via the 5353 UDP port.
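If it helps to visualize it, here is a trimmed-down sketch of what that part of /etc/nftables.conf could look like; the table name and the existing rules are only placeholders, the avahi line is the only addition:
table inet filter {
    chain input {
        # ... existing rules, e.g. ...
        tcp dport 22 accept
        # avahi
        udp dport 5353 accept
    }
}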

RAID 1

At this point we can start setting up the disks. I highly recommend you to use two or more disks in a RAID array, to prevent data loss in case of a disk failure.
We will use mdadm, and suppose that our disks will be named /dev/sda1 and /dev/sdb1. To find out the names issue the sudo fdisk -l command.
$ sudo apt install mdadm
$ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --detail /dev/md/RED
$ sudo -i
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ exit
$ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED
$ sudo mount /dev/md/RED /NAS/RED
The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.
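One detail the commands above take for granted: the mount point has to exist before mounting, and the initial array synchronization can be watched while it runs. Assuming the same /NAS/RED path as above:
$ sudo mkdir -p /NAS/RED
$ cat /proc/mdstat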

fstab

To automount the disks at boot, we will modify the fstab file. Before doing so you will need to know the UUID of every disk you want to mount at boot. You can find out these issuing the command ls -al /dev/disk/by-uuid.
$ sudo nano /etc/fstab
# Disk 1 UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0 
For every disk add a line like this. To verify the functionality of fstab issue the command sudo mount -a.

S.M.A.R.T.

To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool.
$ sudo apt install smartmontools
$ sudo nano /etc/default/smartmontools
start_smartd=yes 
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m yourmail@something.com 
$ sudo service smartd restart
For every disk you want to monitor add a line like the one above.
About the flags:
· -a: full scan.
· -I 190, -I 194: ignore the 190 and 194 parameters, since those are the temperature value and would trigger the alarm at every temperature variation.
· -d sat, -d removable: removable SATA disks.
· -o on: offline testing, if available.
· -S on: attribute saving, between power cycles.
· -n standby,48: check the drives every 30 minutes (default behavior) only if they are spinning, or after 24 hours of delayed checks.
· -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM.
· -m yourmail@something.com: email address to which send alerts in case of problems.
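To run a check by hand, or to verify that smartd picked up your configuration, something along these lines should do (/dev/sda is just an example device):
$ sudo smartctl -a /dev/sda
$ sudo smartctl -t short /dev/sda
$ sudo service smartd status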

Automount USB devices

Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services.
$ sudo apt install pmount
$ sudo nano /etc/udev/rules.d/11-automount.rules
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount-handler@%k.service" 
$ sudo chmod 0777 /etc/udev/rules.d/11-automount.rules
$ sudo nano /etc/systemd/system/automount-handler@.service
[Unit]
Description=Automount USB drives
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %I
ExecStop=/usr/bin/pumount /dev/%I
$ sudo chmod 0777 /etc/systemd/system/automount-handler@.service
$ sudo nano /usr/local/bin/automount
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if [ ${FS_UUID} == ${DISK1_UUID} ] || [ ${FS_UUID} == ${DISK2_UUID} ]; then
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    if [ -z ${FS_LABEL} ]; then
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /usr/local/bin/automount
The udev rule triggers when the kernel announce a USB device has been plugged in, calling a service which is kept alive as long as the USB remains plugged in. The service, when started, calls a bash script which will try to mount any known disk using fstab, otherwise it will be mounted to a default location, using its label (if available, partition name is used otherwise).
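After saving the rule, the service and the script, you can reload udev and systemd and simulate a plug event instead of physically reconnecting the disk; sdb1 is just an example partition name, and --name-match may not be available on very old udev versions:
$ sudo udevadm control --reload-rules
$ sudo systemctl daemon-reload
$ sudo udevadm trigger --action=add --name-match=sdb1
$ systemctl status automount-handler@sdb1.service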

Netdata

Let's now install netdata. For this another handy script will help us.
$ sudo bash <(curl -Ss https://my-netdata.io/kickstart.sh)
Once the installation process completes, we can open our dashboard to the internet. We will use an Nginx reverse proxy and a Let's Encrypt certificate for this.
$ sudo apt install python-certbot-nginx
$ sudo nano /etc/nginx/sites-available/20-netdata
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
    keepalive 64;
}
server {
    listen 80;
    server_name netdata.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://netdata;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/20-netdata /etc/nginx/sites-enabled/20-netdata
$ sudo nano /etc/netdata/netdata.conf
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS.
$ sudo certbot --nginx
Now configure the alarm notifications. I suggest you take a look at the stock file before modifying it, and enable every service you would like. You'll spend some time, yes, but eventually you will be very satisfied.
$ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration

# email global notification options
SEND_EMAIL="YES"
# Sender address
EMAIL_SENDER="NetData netdata@naspi.webredirect.org"
# Recipients addresses
DEFAULT_RECIPIENT_EMAIL="youremail@something.com"

# telegram (telegram.org) global notification options
SEND_TELEGRAM="YES"
# Bot token
TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Chat ID
DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx"

###############################################################################
# RECIPIENTS PER ROLE

# generic system alarms
role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# DNS related alarms
role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# database servers alarms
role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}"
# web servers alarms
role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}"
# proxy servers alarms
role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# peripheral devices
role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}"
$ sudo service netdata restart

Samba

Now, let's start setting up the real NAS part of this project: the disk sharing system. First we'll set up Samba, for the sharing within your LAN.
$ sudo apt install samba samba-common-bin
$ sudo nano /etc/samba/smb.conf
[global]
# Network
workgroup = NASPi
interfaces = 127.0.0.0/8 eth0
bind interfaces only = yes

# Log
log file = /var/log/samba/log.%m
max log size = 1000
logging = file syslog@1
panic action = /usr/share/samba/panic-action %d

# Server role
server role = standalone server
obey pam restrictions = yes

# Sync the Unix password with the SMB password.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
security = user

#======================= Share Definitions =======================

[Disk 1]
comment = Disk1 on LAN
path = /NAS/RED
valid users = NAS
force group = NAS
create mask = 0777
directory mask = 0777
writeable = yes
admin users = NASdisk
$ sudo service smbd restart
Now let's add a user for the share:
$ sudo useradd NASbackup -m -G users,NAS
$ sudo passwd NASbackup
$ sudo smbpasswd -a NASbackup
And at last let's open the needed ports in the firewall:
$ sudo nano /etc/nftables.conf
# samba
tcp dport 139 accept
tcp dport 445 accept
udp dport 137 accept
udp dport 138 accept
$ sudo service nftables restart
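To check that the share actually works, from another machine on the LAN you can list the shares and connect with the user created above (replace the IP address with the one of your server):
$ smbclient -L //192.168.0.5 -U NASbackup
$ smbclient "//192.168.0.5/Disk 1" -U NASbackup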

NextCloud

Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is something very similar to Google Drive, but opensource.
$ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt
First of all, we need to create a database for nextcloud.
$ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER nextclouduser@localhost IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO nextclouduser@localhost IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation.
$ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo mv nextcloud /var/www/html/nextcloud/
$ sudo chown -R www-data:www-data /var/www/html/nextcloud/
$ sudo chmod -R 755 /var/www/html/nextcloud/
$ sudo nano /etc/nginx/sites-available/10-nextcloud
upstream nextcloud {
    server 127.0.0.1:9999;
    keepalive 64;
}
server {
    server_name naspi.webredirect.org;
    root /var/www/nextcloud;
    listen 80;
    add_header Referrer-Policy "no-referrer" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Download-Options "noopen" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header X-Robots-Tag "none" always;
    add_header X-XSS-Protection "1; mode=block" always;
    fastcgi_hide_header X-Powered_By;
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
    location = /.well-known/carddav {
        return 301 $scheme://$host:$server_port/remote.php/dav;
    }
    location = /.well-known/caldav {
        return 301 $scheme://$host:$server_port/remote.php/dav;
    }
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
    location / {
        rewrite ^ /index.php;
    }
    location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
        deny all;
    }
    location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }
    location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
        fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
        set $path_info $fastcgi_path_info;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass nextcloud;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }
    location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
        try_files $uri/ =404;
        index index.php;
    }
    location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "none" always;
        add_header X-XSS-Protection "1; mode=block" always;
        access_log off;
    }
    location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
        try_files $uri /index.php$request_uri;
        access_log off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud
Now enable SSL and redirect everything to HTTPS
$ sudo certbot --nginx
$ sudo service nginx restart
Immediately after, navigate to the page of your NextCloud and complete the installation process, providing the details about the database and the location of the data folder, which is nothing more than the location of the files you will save on the NextCloud. Because it might grow large, I suggest you specify a folder on an external disk.
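Once the web installer has finished, a quick sanity check can be done from the command line with the occ tool, and you may also want the background-jobs cron that NextCloud usually recommends; both assume the /var/www/html/nextcloud path used above:
$ sudo -u www-data php /var/www/html/nextcloud/occ status
$ sudo crontab -u www-data -e
*/5 * * * * php -f /var/www/html/nextcloud/cron.php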

Minarca

Now to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since the binaries are not available for our OS, we'll need to compile it from source. It's not a big deal, even our small Raspberry Pi 4 can handle the process.
$ cd /home/pi/Documents
$ sudo git clone https://gitlab.com/ikus-soft/minarca.git
$ cd /home/pi/Documents/minarca
$ sudo make build-server
$ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb
$ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.

# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log

# Server interface
ServerHost=0.0.0.0
ServerPort=8080

# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on RaspberryPi 4. <a href="https://gitlab.com/ikus-soft/minarca/-/blob/master/doc/index.md">docs</a>
DefaultTheme=default

# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db

# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/
$ sudo chown minarca:minarca /NAS/Backup/Minarca/
$ sudo chmod 0750 /NAS/Backup/Minarca/
$ sudo service minarca-server restart
As always we need to open the required ports in our firewall settings:
$ sudo nano /etc/nftables.conf
# minarca
tcp dport 8080 accept
$ sudo service nftables restart
And now we can open it to the internet:
$ sudo nano /etc/nginx/sites-available/30-minarca
upstream minarca {
    server 127.0.0.1:8080;
    keepalive 64;
}
server {
    server_name minarca.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://minarca;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
    listen 80;
}
$ sudo ln -s /etc/nginx/sites-available/30-minarca /etc/nginx/sites-enabled/30-minarca
And enable SSL support, with HTTPS redirect:
$ sudo certbot --nginx
$ sudo service nginx restart

DNS records

As a last thing, you will need to set up your DNS records, in order to avoid having your mail rejected or sent to spam.

MX record

name: @ value: mail.naspi.webredirect.org TTL (if present): 90 

PTR record

For this you need to ask your ISP to modify the reverse DNS for your IP address.

SPF record

name: @ value: v=spf1 mx ~all TTL (if present): 90 

DKIM record

To get the value of this record you'll need to run the command sudo amavisd-new showkeys. The value is between the parentheses (it should start with V=DKIM1), but remember to remove the double quotes and the line breaks.
name: dkim._domainkey value: V=DKIM1; P= ... TTL (if present): 90 

DMARC record

name: _dmarc value: v=DMARC1; p=none; pct=100; rua=mailto:dmarc@naspi.webredirect.org TTL (if present): 90 

Router ports

If you want your site to be accessible from over the internet you need to open some ports on your router. Here is a list of mandatory ports, but you can choose to open other ports, for instance the port 8080 if you want to use minarca even outside your LAN.

mailserver ports

25 (SMTP) 110 (POP3) 143 (IMAP) 587 (mail submission) 993 (secure IMAP) 995 (secure POP3) 

ssh port

If you want to open your SSH port, I suggest you move it to something different from port 22 (the default port), to mitigate attacks from the outside.
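As a minimal sketch (2222 is just an arbitrary example), that means editing sshd_config, restarting the service, opening the new port in nftables instead of 22, and remembering the -p flag when connecting:
$ sudo nano /etc/ssh/sshd_config
Port 2222
$ sudo service ssh restart
$ ssh -p 2222 pi@naspi.webredirect.org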

HTTP/HTTPS ports

80 (HTTP) 443 (HTTPS) 

The end?

And now the server is complete. You have a mailserver capable of receiving and sending emails, a super monitoring system, a cloud server to have your files wherever you go, a samba share to have your files on every computer at home, a backup server for every device you own, and a webserver if you'll ever want to have a personal website.
But now you can do whatever you want, add things, tweak settings and so on. Your imagination is your only limit (almost).
EDIT: typos ;)
submitted by Fly7113 to raspberry_pi [link] [comments]

./play.it 2.12: API, GUI and video games

./play.it 2.12: API, GUI and video games

./play.it is a free/libre software that builds native packages for several Linux distributions from DRM-free installers for a collection of commercial games. These packages can then be installed using the standard distribution-provided tools (APT, pacman, emerge, etc.).
A more complete description of ./play.it has already been posted in linux_gaming a couple months ago: ./play.it, an easy way to install commercial games on GNU/Linux
It's already been one year since version 2.11 was released, in January 2019. We will only briefly review the changelog of version 2.12 and focus on the different points of ./play.it that kept us busy during all this time, and of which coding was only a small part.

What’s new with 2.12?

Though not the focus of this article, it would be a pity not to present all the added features of this brand new version. ;)
Compared to the usual updates, 2.12 is a major one, especially since we had slowed down the addition of new features for two years. Some patches had been gathering dust since the end of 2018 before finally being integrated into this update!
The list of changes for this 2.12 release can be found on our forge. Here is a full copy for convenience:

Development migration

History

As with many free/libre projects, ./play.it development started on some random sector of a creaking hard drive, and unsurprisingly, a whole part of its history (everything predating version 1.13.15, released on March 30th, 2016) disappeared into limbo because some unwise operation destroyed the only copy of the repository… Lesson learned: what's not shared doesn't stay around long, and so was born the first public Git repository of the project. The easing of collaborative work was only accidentally achieved by this quest for permanence, and wasn't the original motivation for making the repository publicly available.
Following this decision, ./play.it source code has been hosted successively by many shared forge platforms:

Dedicated forge

As development progressed, ./play.it's need for resources grew: its code was divided into several repositories to improve the workflow of the different aspects of the project, continuous integration tests and their constraints were added, etc. A furious desire to understand the nooks and crannies behind a forge platform was the last deciding factor towards hosting a dedicated forge.
So it happened: we deployed a forge platform on a dedicated server, hugely benefiting from the tremendous work achieved by the Debian maintainers of the GitLab package. In return, we tried to contribute our findings to improving the packaging of this software.
That was not expected, but this migration happened just a little time before the announcement “Déframasoftisons Internet !” (French article) about the planned end of Framagit.
This dedicated instance used to be hosted on a VPS rented from Digital Ocean until the second half of July 2020, and since then has been moved to another VPS, rented from Hetzner. The specifications are similar, as well as the service, but thanks to this migration our hosting costs have been cut in half. Keeping in mind that this is paid by a single person, so any little donation helps a lot on this front. ;)
To the surprise of our system administrator, this last migration took only a couple hours with no service interruption reported by our users.

Forge access

This new forge can be found at forge.dotslashplay.it. Registrations are open to the public, but we ask you to not abuse this, the main restriction being that we do not wish to host projects unrelated to ./play.it. Of course exceptions are made for our active contributors, who are allowed to host some personal projects there.
So, if you wish to use this forge to host your own work, you first need to make some significant contributions to ./play.it.

API

The collection of supported games growing endlessly, we have started the development of a public API allowing access to lots of information related to ./play.it.
This API, which is not yet stabilized, is simply an interface to a versioned database containing all the ./play.it scripts, the archives they handle, and the games installable through the project. Relations between those items are, of course, handled as well, enabling its use for requests like: "What packages are required on my system to install Cæsar Ⅲ?" or "What are the free (as in beer) games handled via DOSBox?".
Originally developed as support for the new, in-development Web site (we'll talk about it later on), this API should facilitate the development of tools around ./play.it. For example, it'll be useful for whoever would like to build complete video game handling software (downloading, installation, starting, etc.) using ./play.it as one of its building bricks.
For those curious about the technical side, it's an API based on Lumen that makes requests to a MariaDB database, all self-hosted on Debian Sid. Not only is the code of the API versioned on our forge, but so are the structure and content of the databases, which will allow those who desire it to easily install a local version.

New website

Based on the aforementioned API, a new website is under development and will replace our current website based on DokuWiki.
Indeed, if the lack of database and the plain text files structure of DokuWiki seemed at first attractive, as ./play.it supported only a handful of games (link in French), this feature became more inconvenient as the library of ./play.it supported games grew.
We shall make an in-depth presentation of this website for the 2.13 release of ./play.it, but a public demo of the development version from our forge is already available.
If you feel like providing a helping hand on this task, some priority tasks have been identified to allow opening a new Web site able to replace the current one. And for those interested in technical details, this Web site was developed in PHP using the Laravel framework. The current in-development version is hosted for now on the same Debian Sid machine as the API.

GUI

A regular comment about the project is that, if the purpose is to make installing games accessible to everyone without technical skills, having to run scripts in the terminal remains somewhat intimidating. Our answer until now has been that while the project itself doesn't aim to provide a graphical interface (KISS principle, "Keep it simple, stupid", still and always), it would be relatively easy to develop a graphical front-end for it later on.
Well, it happens that is now reality. Around the time of our latest publication, one of our contributors, using the API we just talked about, developed a small prototype that is usable enough to warrant a little shout out. :-)
In practice, it is some small Python 3 code (an HCI completely in POSIX shell is for a later date :-°), using GTK 3 (and still a VTE terminal to display the commands issued, but the user shouldn't have to input anything in it, except perhaps the root password to install some packages). This allowed us to verify that, as we used to say, it would be relatively easy, since a script of less than 500 lines of code (written quickly over a week-end) was enough to do the job!
Of course, this graphical interface project stays independent from the main project, and is maintained in a specific repository. It seems interesting to us to promote it in order to ease the use of ./play.it, but this doesn't prevent any other similar projects to be born, for example using a different language or graphical toolkit (we, globally, don't have any particular affinity towards Python or GTK).
Using this HCI involves three steps: first, a list of available games is displayed, coming directly from our API. You just need to select in the list (optionally using the search bar) the game you want to install. It then switches to a second display, which lists the required files. If several alternatives are available, the user can select the one they want to use. All those files must be in the same directory, and the address bar on the top lets you select which directory to use (clicking the open button on the top opens a filesystem navigation window). Once all those files are available (if they can be downloaded, the software will do it automatically), you can move ahead to the third step, which is just watching ./play.it do its job :-) Once done, a simple click on the button at the bottom will run the game (and since, from this step on, the game is fully integrated on your system as usual, you no longer need this tool to run it).
To download potentially missing files, the HCI will use, depending on what's available on the system, either wget, curl or aria2c (this last one also handling torrents), of which the output will be displayed in the terminal of the third phase, just before running the scripts. For privilege escalation to install packages, sudo will be used preferentially if available (with the option to use a third-party application for password input, if the corresponding environment variable is set, which is more user-friendly), else su will be used.
Of course, any suggestion for an improvement will be received with pleasure.

New games

Of course, such an announcement would not be complete without a list of the games that got added to our collection since the 2.11 release… So here you go:
If your favourite game is not supported by ./play.it yet, you should ask for it in the dedicated tracker on our forge. The only requirement to be a valid request is that there exists a version of the game that is not burdened by DRM.

What’s next?

Our team being inexhaustible, work on the future 2.13 version has already begun…
A few major objectives of this next version are :
If your desired features aren't on this list, don't hesitate to let us know in the comments of this news release. ;)

Links

submitted by vv224 to linux_gaming [link] [comments]

Study plan for MS-500: Measured skills + Microsoft Docs + some labs + test exams?

Hi all!
Would like your input on my MS-500 study plan. Decided to do Exam MS-500: Microsoft 365 Security Administration.
I have 2-3 years of M365 experience. Most of the stuff covered in MS-500 I have at least poked around in, and some of it I work with daily. Reading the Skills Measured, nothing came up that I didn't at least know existed. So, I decided to make a study plan.

My MS-500 Study Plan
  1. Add links to Microsoft Docs to the Measured Skills topics (see the example of the poor man's study guide below)
  2. Do some labs on the stuff that is new or needs a refresher
  3. Test exams (Fastlane seems to have a free one)
  4. Take the exam
  5. Pass it 🍻

Any holes in my plan?
One concern I do have is that the Docs are kept up to date, but perhaps contain too much information that isn't covered by the exam? (I don't have a lot of time, so my studying needs to be efficient, but I also want to learn for real life; no brain dumps.)
Thanks! :)


Example of how to add Microsoft Docs links to the Measured Skills topics

Implement and manage identity and access (30-35%)

Secure Microsoft 365 hybrid environments
submitted by Lefty4444 to Office365 [link] [comments]

After Effects crashing on startup

Hey folks, I get the error log below when opening After Effects. I can't get past it unless I uninstall. Any ideas here? It keeps happening.

submitted by Lolosdomore to AfterEffects [link] [comments]
