
PUBG Mobile Vikendi Snow Map Release Date and Start Time Announced

December 18, 2018

HIGHLIGHTS

  • Vikendi is a part of the 0.10.0 update
  • It is the fourth map in the game following Erangel, Miramar, and Sanhok
  • It is playable from 5:30am IST on December 21

The PUBG Mobile 0.10.0 update brought support for the Vikendi map, and Tencent has now confirmed when you can play it. According to a tweet from the official PUBG Mobile account, the Vikendi snow map releases on December 20, and it becomes available for matchmaking at 5:30am IST on December 21. Earlier today, administrators of the official PUBG Mobile Discord stated that there was no definite timeline for when Vikendi could be played; evidently this has changed. Before you can get the Vikendi map, though, you'll need to download the PUBG Mobile 0.10.0 update, which has a 2.1GB download size.

This means PUBG Mobile could be the first version of PUBG to get Vikendi outside of PUBG's public test servers, where it's already playable for PC users. The PUBG Vikendi snow map is now live on the PUBG PC Public Test Server (PTS), so if you bought PUBG on PC, you can check out Vikendi before it's available in the main game. While no date has been given for Vikendi on PUBG Xbox One and PS4, it's safe to say it will be available soon, considering it has already been added to the PUBG PS4 PTS.

Other additions to the PUBG PTS include the G64 rifle and a snowmobile. It's speculated that the PUBG Vikendi PTS for PS4 and Xbox One will be live in early January and that its full release will bring the Vikendi Event Pass along with it, though we won't be surprised to see it arrive earlier in order for PUBG to stay competitive with the likes of Fortnite and Overwatch, both of which have winter-themed events underway.

As for PUBG Mobile, Vikendi isn't the only new addition. The PUBG Mobile 0.10.0 update brings a reporting system that lets players report suspicious behaviour while spectating a match after death. Cross-server matchmaking has been added too: when enabled, players have a chance to be matched with those of the same tier on other servers. Also new in the PUBG Mobile 0.10.0 update is the Firearms Finish Upgrade System, which lets players upgrade weapon finishes to get new kill effects, broadcasts, and death crate appearances.

VAR at the World Cup: What is the technology being used in Russia and how does it work?

June 19, 2018
The World Cup will put the greatest footballing nations on Earth to the test. But there is another trial happening, perhaps just as important and even more controversial: that of VAR, or the video assistant referee.

The technology is being used at the World Cup for the first time, and has the potential to fundamentally change games. It could decide the future of the tournament by reversing some of the most important refereeing decisions in the game.

Proponents claim that VAR will ensure that decisions are fair and that the best team wins. But even those supporters admit that the technology is still at a very early stage – with supporters and referees still apparently confused about how it should actually be used.

Despite that complexity, the technology is fundamentally very simple: it is an extra referee who watches the game and advises officials on decisions. In practice, though, it might be very complicated indeed.

How does it work?

There are 13 officials who can be chosen as the video assistant referee. They will all sit in a special hub in Moscow – no matter where the game is happening – and they will do so wearing their full kit, as if they were ready to jump onto the pitch at any time.

Of those, one will be chosen for each game, and they will have a team of three assistants.
In there, they will receive a stream from inside the stadium, which is made up of the view from a whole host of cameras – including slow motion ones – which the referees can flick between.

The VAR will watch the whole of each game. If they see something wrong, they can flag it to the referee; if the referee thinks something is wrong, he can get in touch with the VAR.

Either way, the VAR is only advisory. Any decision ultimately rests with the referee, even if the VAR has advised the opposite.


A general view of the Video Assistant Referee's Room home of the VAR system to be used at all FIFA World Cup matches during the Official Opening of the International Broadcast Centre on June 9, 2018 in Moscow, Russia (Laurence Griffiths/Getty Images)

What can be referred to the video referee?

Fifa might have allowed the technology into the World Cup. But they have severely limited the kinds of decisions it can actually be used for.

In total, there are four different sorts of incident that can be reviewed:
  • Goals. The system can be used to check whether a goal actually went in, in the obvious way. But it can also adjudicate on the lead-up to the goal, not just the ball passing into the net – if an infringement in the build-up should have prevented the goal, then VAR can stop it being awarded.
  • Penalties. This can go either way, being used to check whether a penalty should have been awarded and wasn't, but also reversing the decision if a foul is given in the penalty box.
  • Red cards. If the referee has decided a foul has been committed, then VAR can be used to decide whether a red card should be awarded. This might be the most controversial thing that the video technology will be relied on for, for reasons we will get onto later.
  • Mistaken identity. Probably the vaguest but also one of the most important parts of VAR's responsibility, this will allow the additional referees to spot if the wrong player has been disciplined. If so, the referee will be corrected. That should stop situations like the mix-up between Kieran Gibbs and Alex Oxlade-Chamberlain that saw the wrong player sent off during a match in 2014.


How do you know when it's happening?

The entire system of VAR is focused on the referees, not the spectators, which means the priority is not on making clear when or whether the system is being used.

It can be engaged in one of three ways.

In some situations, the referee might simply get a message in their earpiece, indicating that a decision is being reviewed. They'll get word from the VAR referees, who might tell them to change or stick with a decision. Spectators might not even know this is happening, or just see the referee touch their earpiece.

Another sees the video process engaged more formally, and the referee will draw a rectangle in the air to indicate a TV. That can be triggered by either the referee or the VAR judges, who will again have a word through the earpiece. The decision will be relayed to the referee, who will then make the rectangle sign again and carry on with the match.

The final one is the most clear, but could be the most confusing and frustrating in the stadium. Referees will head over to a small reviewing station on the side of the pitch – making the TV sign as they do – where they will also be able to see the same replays that are being shown to the VAR officials. They'll then consult together and make their decision.

In all cases, the decision will be made clear in the normal way – by the traditional referee signalling the decision, in the same way as without VAR. They might make the rectangle TV sign in the air to indicate how the decision was made, but then will continue in the usual way.

What can spectators see?

Perhaps the strangest and most confusing part of VAR is the fact that spectators won't actually get to see any of the replays, or even necessarily know what is happening. At most, they'll see the referee make the TV screen sign and perhaps head off to watch the pitch-side review.

But viewers at home will get to see the same pictures the referees are being shown, so the decision should not be quite so shocking. (This only goes one way: the referees don't get to see any broadcast images, or hear any commentary.)

Does it make any difference to players?

In the more direct sense – that is, discounting any arguments about whether it will change the pace of games – VAR doesn't allow players to do anything specific. In fact, the only significant rule change is what players can't do: they must not make the VAR sign themselves, in the same way they can't pretend to hold up a yellow card to someone as part of a protest, and they can be booked if they try it.

Why is it so controversial?
It took a long time for VAR to be introduced. And that was partly because many people fear it could ruin the flow and feel of the game.

Critics suggest that referees flagging up decisions using VAR – and then taking time to review footage and make their decision – could cause disruptions in play. And they also suggest that it will take away the important nuance that is part of refereeing, butting into matches to decide on any incident that relies on shades of grey.

Proponents have dismissed that idea. Earlier this year, refereeing body PGMOL stressed that the system would only interrupt games when there were very clear problems – "the rule of thumb is essentially 'if it’s not clear and obvious, leave it', and 'minimum interference, maximum benefit'," The Independent's Miguel Delaney wrote at the time.

But in use, the technology has been far from clear.

At a friendly match between Italy and England just weeks ago, a bizarre decision saw the video referee award a penalty – but that came after minutes of unexplained contemplation, and was not well communicated to spectators. It followed similar events at Tottenham, which saw Spurs have two goals disallowed during an FA Cup game, but only after a lengthy disruption to play.

Officials might now be more used to using the technology, and working together. But we won't know until a World Cup match is interrupted.

Are there any changes being made for the World Cup?

The previous confusing instances have led to some changes for this competition. Fifa will be able to use a special tablet to send information to spectators and broadcasters, which should hopefully give them a bit more of a sense of what is actually happening while decisions are being made.

Is it likely to make a difference to decisions?

A study released this week found that slow-motion videos and real-time ones mostly led to the same decisions: in the experiment, referees were 63 per cent right when they watched an incident slowed down, compared with 61 per cent at normal speed.

But it found that slowing down videos seemed to severely change the way that referees saw intention. Watching in slow-motion made them far more likely to think that a foul had been done on purpose – and therefore considerably more likely to give a red card.

By
Andrew Griffin 
Jun 16, 2018

Google says its AI is better at predicting death than hospitals

June 19, 2018

Google’s Medical Brain team is now training its AI to predict the death risk among hospital patients — and its early results show it has slightly higher accuracy than a hospital’s own warning system.

Bloomberg describes the healthcare potential of the Medical Brain’s findings, including its ability to use previously unusable information in order to reach its predictions. The AI, once fed this data, made predictions about the likelihood of death, discharge, and readmission.

In a paper published in Nature in May, Google’s team says of its predictive algorithm:

These models outperformed traditional, clinically-used predictive models in all cases. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios.

In one major case study in the findings, Google applied its algorithm to a patient with metastatic breast cancer. Twenty-four hours after she was admitted, Google gave her a 19.9 percent chance of dying in the hospital, in contrast with the 9.3 percent estimate from the hospital’s augmented Early Warning Score. Less than two weeks later, the patient died from her condition.

In order to come to that number, the AI tallied 175,639 data points from the patient’s electronic medical records, including handwritten notes. According to the paper, this is the difference between Google’s work and previous deep learning approaches:

In general, prior work has focused on a subset of features available in the EHR, rather than on all data available in an EHR, which includes clinical free-text notes, as well as large amounts of structured and semi-structured data.

In its whole study, Google analyzed 216,221 hospitalizations involving 114,003 patients — and over 46 billion data points from all of their electronic health records.
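Google's actual system is a deep neural network trained on the complete EHR, including free-text notes. Purely as a toy illustration of how a risk probability can be computed from patient features, here is a logistic-regression-style scorer; the features, weights, and patient values below are invented for demonstration and are not from the paper:

```python
import math

# Invented example features and weights (not from Google's model), just to
# show how a linear score is squashed into a probability with a sigmoid.
FEATURE_WEIGHTS = {
    "age_over_65": 1.2,
    "metastatic_cancer": 2.0,
    "abnormal_vitals": 0.8,
    "prior_admissions": 0.5,
}
BIAS = -4.0  # baseline log-odds for a patient with no risk factors

def mortality_risk(features):
    """Return an in-hospital mortality probability between 0 and 1."""
    score = BIAS + sum(FEATURE_WEIGHTS[name] * value
                       for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid

patient = {"age_over_65": 1, "metastatic_cancer": 1,
           "abnormal_vitals": 1, "prior_admissions": 2}
print(round(mortality_risk(patient), 3))  # → 0.731
```

Google's model differs in kind, not just scale: it learns its weights from the raw records, across hundreds of thousands of features per patient, rather than a hand-picked handful.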

This isn’t the first time Google’s AI has been applied to predictive healthcare. Earlier this year, DeepMind partnered with the Department of Veterans Affairs to feed its AI 700,000 medical records from veterans in order to predict deadly changes in patient condition.

The company is also working to develop a voice recognition system for clinical notes which will eliminate the need for doctors to type them in. In that particular case, the challenge comes from inaccuracy — even the smallest mistakes in a patient’s record can result in them getting the wrong care. Dr Steven Lin, who spearheaded the research with Google, told CNBC:

This is even more of a complicated, hard problem than we originally thought. But if solved, it can potentially unshackle physicians from EHRs and bring providers back to the joys of medicine: actually interacting with patients.

If Google can both smooth the process of entering data and improve the means by which that data is used, it could cut down on human error in medical care.

The company’s greatest challenge is that the data in this case isn’t freely available, for security reasons. In 2016, the company faced backlash from patients when it was revealed it had gained access to the data of 1.6 million patients — without consent — from three hospitals in London in order to develop an app which notified doctors when a patient was likely to get kidney disease.

It could also stoke fears of an AI having too much say over who gets what care. If a patient is given a significantly higher risk than another, will the hospital allocate more resources to the former based on the AI’s prediction?

By
Rachel Kaser
Jun 19, 2018

“Learn with Google AI” website offers free machine learning education for all

June 17, 2018
Google introduces “Learn with Google AI” website to educate people about machine learning and AI for free

Artificial Intelligence (AI) and machine learning (ML) are currently among the trending topics in the tech industry. Google wants to make AI and ML accessible to more people by providing lessons, tutorials and hands-on exercises at all experience levels.

Therefore, Google India on Thursday (March 1) introduced a new website called “Learn with Google AI” that encourages everyone to understand how AI works, learn about core ML concepts, develop skills and apply AI to solve challenging real-world problems. These educational resources are developed by ML experts at the company and cater to everyone, from beginners to researchers looking for advanced tutorials.

“We believe it’s important that the development of AI reflects as diverse a range of human perspectives and needs as possible. So, Google AI is making it easier for everyone to learn ML by providing a huge range of free, in-depth educational content,” Zuri Kemp, Programme Manager for Google’s machine learning education, said in a statement.

“This is for everyone — from deep ML experts looking for advanced developer tutorials and materials, to curious people who are ready to try to learn what ML is in the first place,” Kemp added.

“Learn with Google AI” also offers a free online course called the new Machine Learning Crash Course (MLCC), which features videos from ML experts at Google, interactive visualizations illustrating ML concepts, coding exercises using cutting-edge TensorFlow (TF) APIs, and a focus on how practitioners implement ML in the real world.

“Our engineering education team originally developed this fast-paced, practical introduction to machine learning fundamentals for Googlers. So far, more than 18,000 Googlers have enrolled in MLCC, applying lessons from the course to enhance camera calibration for Daydream devices, build virtual reality for Google Earth, and improve streaming quality at YouTube. MLCC’s success at Google inspired us to make it available to everyone,” added Kemp.

The course’s duration is estimated at 15 hours, with interactive lessons, lectures from Google researchers, and over 40 exercises included. It is suitable for newcomers, including those with no ML experience, though Google suggests that students have at least intro-level algebra, programming basics, and some Python proficiency.
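For a taste of the fundamentals MLCC covers, here is one of its core ideas, fitting a line by gradient descent, sketched in plain Python rather than the TensorFlow APIs the course actually uses (the data and learning rate are invented for the example):

```python
# Toy gradient descent: fit y = w*x + b to data generated by y = 2x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w  # step each parameter against its gradient
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # should approach 2.0 and 1.0
```

The course builds from exactly this kind of loop up to the TensorFlow abstractions that do the differentiation and optimization automatically.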

“There’s more to come from Learn with Google AI, including additional courses and documentation. We’re excited to help everyone learn more about AI,” said Kemp.

By Kavita Iyer
March 4, 2018

Google bans AI for weapon use

June 09, 2018

Google has promised not to use AI for weapons, following protests over its partnership with the US military.

A decision to provide machine-learning tools to analyse drone footage caused some employees to resign.

Google told employees last week it would not renew its contract with the US Department of Defense when it expires next year.

It has now said it will not use AI for technology that causes injury to people.

The new guidelines for AI use were outlined in a blog post from chief executive Sundar Pichai.

He said the firm would not design AI for:
  • technologies that cause or are likely to cause overall harm
  • weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people
  • technology that gathers or uses information for surveillance violating internationally accepted norms
  • technologies whose purpose contravenes widely accepted principles of international law and human rights

He also laid out seven more principles which he said would guide the design of AI systems in future:
  • AI should be socially beneficial
  • It should avoid creating or reinforcing bias
  • Be built and tested for safety
  • Be accountable
  • Incorporate privacy design principles
  • Uphold high standards of scientific excellence
  • Be made available for use

When Google revealed that it had signed a contract to share its AI technology with the Pentagon, a number of employees resigned and thousands of others signed a protest petition.

Project Maven involves using machine learning to distinguish people and objects in drone videos.

The Electronic Frontier Foundation welcomed the change of heart, calling it a "big win for ethical AI principles".

8 June 2018
By BBC NEWS

GitHub’s new CEO promises to save Atom post-Microsoft acquisition

June 09, 2018

Earlier this week, Microsoft announced the acquisition of GitHub for $7.5 billion, and the installation of Xamarin co-founder Nat Friedman as the social coding platform’s new CEO.

It goes without saying that this wasn’t entirely welcomed by the community, particularly by those who remember Microsoft’s antitrust days of the 1990s.

One specific area of concern is what Microsoft would do with GitHub’s beloved Atom text editor.

Developers are worried that Microsoft could pull the plug on Atom, as it directly competes with Visual Studio (VS) Code, and both editors have an awful lot in common. They’re both cross-platform and based on the Electron framework, for example.

Fortunately, GitHub has no plans to discontinue Atom, and intends to continue development on the popular text editor. As Friedman explained in a recent AMA:

Developers are really particular about their setup, and choosing an editor is one of the most personal decisions a developer makes. Languages change, jobs change, you often get a new computer or upgrade your OS, but you usually pick an editor and grow with it for years. The last thing I would want to do is take that decision away from Atom users. 
Atom is a fantastic editor with a healthy community, adoring fans, excellent design, and a promising foray into real-time collaboration. At Microsoft, we already use every editor from Atom to VS Code to Sublime to Vim, and we want developers to use any editor they prefer with GitHub. 
So we will continue to develop and support both Atom and VS Code going forward.

He’s not wrong. Developers are fiercely passionate about their setups, and both Visual Studio Code and Atom have their share of evangelistic users. If Microsoft made any big changes here, it’d undo much of the developer goodwill it’s garnered during Satya Nadella’s tenure as CEO.

Friedman also pointed out that Visual Studio Code and Atom both share a lot of history.

Both are based on Electron, as mentioned, but Atom also uses Microsoft’s Language Server protocol. There are also rumblings that Atom could adopt the Debug Adapter protocol, which would allow common debugger support between editors. He also suggested that both editors could support compatible real-time editing in the near future:

We’re excited about the recent developments in real-time collaboration, and I expect Atom Teletype and VS Code Live Share to coordinate on protocols so that eventually developers using either editor can edit the same files together in real-time.
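The Language Server Protocol mentioned above is, on the wire, JSON-RPC messages prefixed with a Content-Length header. A minimal sketch of how such a request is framed (the request shape follows LSP's textDocument/definition method; the file URI is a made-up example):

```python
import json

def frame_lsp_request(request_id, method, params):
    """Build one LSP message: a Content-Length header, a blank line,
    then a JSON-RPC 2.0 body."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })
    return f"Content-Length: {len(body)}\r\n\r\n{body}"

# Example: ask the language server where a symbol is defined.
msg = frame_lsp_request(1, "textDocument/definition", {
    "textDocument": {"uri": "file:///project/main.py"},
    "position": {"line": 10, "character": 4},
})
print(msg.splitlines()[0])  # the Content-Length header line
```

Because both editors speak this same framing, a language server written once can serve Atom and VS Code alike, which is the point of the shared protocols Friedman highlights.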

You can read Friedman’s AMA here. It’s actually pretty interesting, and if you’ve been following the acquisition news this week, it’s worth checking out. You’ll notice that, as he did with his open letter, he spends a lot of effort reassuring people that the day-to-day operations of GitHub won’t change after the acquisition.

Friedman also takes pains to prove his developer credentials, extensively talking about how he got his start in free software, his love of the Emacs text editor, and how he made his first commit to GitHub in 2009.

Will that be enough to reassure GitHub’s more jittery users, however? That remains to be seen.


Jun 06, 2018 in DESIGN & DEV
by MATTHEW HUGHES

Zayn Malik May Perform in Nepal!

June 08, 2018

The former One Direction singer ZAYN MALIK may be performing in Nepal! The British singer, known for his hits Pillowtalk and the recent Let Me, will be in India for a mini-tour in August. Malik, who has spoken about Bollywood and covered some Kailash Kher classics, is slated to perform in Mumbai, Kolkata, Hyderabad and Delhi. The team behind bringing Zayn to India, JPR EVENTS, have stated that they are “planning to organise his concert in Nepal”. If ZAYN does end up performing in Nepal, I am sure the 1D fans and fans of ZAYN will go wild over him! Of course, the concert is bound to attract many non-fans as well, since it’s still rare to have global popstars performing in the country. JPR Events have previously organised Bryan Adams and Kailash Kher (2011) concerts in Nepal. The event company works across India, Nepal, Pakistan and Bangladesh.

Would you go to see ZAYN in concert?

June 3, 2018
By lexlimbu

Microsoft And GitHub: Why Pay With Stock?

June 08, 2018
Summary

  • The GitHub transaction looks expensive to many shareholders, but this is not a deal motivated by financial reasons.
  • Strategic in nature, GitHub furthers Microsoft's move into open-source software used to support its software development community.
  • The decision to pay for the deal in common stock is unusual. Microsoft could have written a $7.5 billion check for GitHub ten times over without blinking an eye.
  • To me, this looks like a potential means to keep important GitHub employees invested in future Microsoft success. The founders will be the largest individual stockholders behind Bill Gates.

On Monday, Microsoft (MSFT) announced it had reached an agreement to acquire GitHub, a collaborative software development platform. Though the deal had been widely speculated to be in the pipeline, shareholders were likely taken aback by the $7.5 billion price tag – nearly four times what the company was valued at in a July 2015 secondary funding round. Despite very explicit statements from both Microsoft CEO Satya Nadella and outgoing GitHub CEO and co-founder Chris Wanstrath that GitHub would remain independent and retain its “ethos,” GitHub users appear very wary of the intrusion by the computing giant into what they viewed as a place safe from the pillaging hands of corporations. Retention of repositories is a major concern for many, with alternatives like GitLab and Bitbucket ready with open arms to take in any developers that flee.

The move is a predictable extension of Microsoft’s ongoing shift into open-source software used to support its software development community. It is yet another deal that highlights the growing importance of cloud-based software and the Internet of Things within the technology space. Microsoft shareholders, for the most part, already understand that GitHub is not being acquired for its financial value (the profit it can, or will, generate).

Instead, it is being bought for its platform, with Microsoft hopefully able to lure GitHub users deeper into the paid Microsoft developer environment in a tactful way that does not upset the base. Rather than financial value, this is more about strategy. Investors can draw parallels to the YouTube acquisition by Google (NASDAQ:GOOG) (GOOGL), a deal that has never made Google a dollar of profit but is widely viewed as a major success.

What I found most interesting was the decision to pay for the deal in common stock. While this is, technically, an all-stock transaction, the deal will essentially be cash funded. Microsoft is a rampant purchaser of its own shares, spending $8.359 billion gross, or nearly $2.8 billion per quarter, thus far in its fiscal 2018. There is more than $30 billion remaining on its recently re-upped buyback authorization.

In the deal announcement, Microsoft even announced an acceleration of its planned buybacks ahead of its normal quarterly run rate – dilution from the GitHub deal is expected to be fully offset within six months. The company certainly isn’t hurting for money: Microsoft held $132 billion in cash and cash equivalents on its balance sheet at the most recent quarterly close.

So why not buy out GitHub with cash? There are, after all, costs and distractions associated with running and increasing the share buyback program. It would be easier just to cut a check. GitHub was already looking for a new CEO to replace Chris Wanstrath and venture capitalists would assuredly be looking for a check – this doesn’t look to be a decision made from their end. Perhaps the answer lies in keeping core GitHub employees invested in Microsoft so they do not head for the hills after transaction close.

The deal will make billionaires out of founders PJ Hyett, Chris Wanstrath, and Tom Preston-Werner, with these three controlling roughly half of GitHub today. As this is all stock, it will make them the largest individual holders behind founder Bill Gates and well ahead of current CEO Satya Nadella. Granular details of the GitHub sale are not public, so there might be restrictions on their ability to sell shares and when.

As publicly announced, Wanstrath will be staying on as a Technical Fellow, assisting with strategic software initiatives. The role of the other two, if any, is unknown. I could understand reticence to publicly announce any relationship with Preston-Werner, who had to resign from the CEO role in 2014 following sexual harassment allegations (despite GitHub finding no evidence to support the claims).

Even to me, this seems like a speculative stretch, but I’ve got no other good reasons to point to. I’m not aware of any tax benefits from taking this approach, nor do I think that this is Microsoft speculating its stock will be cheaper to buy back a quarter or two down the line. Hopefully, investors will learn more in time about the motivations behind the structure of this acquisition.


Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

Michael Boyd wrote this article, and it expresses his opinions. He says he received no compensation for it (other than from Seeking Alpha) and has no business relationship with any company whose stock is mentioned in this article.


Jun 07, 2018
About: Microsoft Corporation (MSFT)
Michael Boyd

Intel Plans to Break Drone Light Show Record with Over 1,500 Drones Flown at 50th Anniversary Celebrations

May 01, 2018
Intel Shooting Star drones form the Olympic rings as part of the Olympic Winter Games PyeongChang 2018 opening ceremony drone light show. (Credit: Intel Corporation)

What’s New: Intel plans to break its world record title for most drones flown simultaneously with more than 1,500 drones flown as part of the company’s 50th-anniversary events this summer.

“Intel has been advancing technology for 50 years. To celebrate that fact and showcase our ongoing innovation, we’re looking to break another drone light show record with our Intel Shooting Star drones and related technology.”
– Anil Nanduri, vice president and general manager, the Intel drone team

How It Works: The current record of 1,218 Intel® Shooting Star™ drones was set earlier this year. The new show featuring more than 1,500 drones is planned for this summer and will be a live one-time public show at an Intel site for employees and their families.

The Intel Shooting Star drones are unmanned aerial vehicles (UAVs) specifically designed for entertainment purposes, equipped with LED lights that can create countless color combinations and easily be programmed for any animation. The fleet of drones is controlled by one pilot.

Intel’s Goals: The technology employed in our drone light shows can be applied to other applications, including search and rescue, where multiple drones can look for a lost hiker or commercial applications for large infrastructure inspections that reduce inspection time and improve efficiency.

As we look forward, the notion of flying lights and being able to use drones indoors – including in stadiums and theaters, and other indoor venues where GPS signals for positioning are not available – led us to develop new capabilities to fly a fleet of drones inside.

At Intel, we will continue to push the boundaries of drone technology, accelerating the adoption of commercial drone use for business transformation and proliferating this new, innovative form of entertainment.

Intel at AUVSI Xponential: Intel will announce the latest on its commercial drone innovations at AUVSI Xponential on May 1. Witness the latest in Intel drone technology and learn about the newest features and capabilities of both Intel’s drone hardware and software solutions at booth #628 and in the Outdoor Unmanned Experience.



roshankhapung



Thank you for reading my blog! I created my own blog to have the freedom to write about anything I'm interested in.




