Open data licensing with blockchain

"Blockchain helps in adding flexibility and security to opendata licensing, motivating more creators to do so and more users to create and re-use." With this message, the post answers - What is opendata? Why is licensing important when we are actually opening things? What is blockchain? And how the above three works together?

Open data and licensing: Open data (analogous to open source software, but for data) is data that is open for anyone to use, re-use, and re-distribute. So if it is all open, why do we need licensing? For many reasons; here are a few:

(a) People usually put a lot of effort into gathering data, e.g. in astronomy. They clean it and make it usable. Licensing gives them due credit. And, most importantly,

(b) licensing makes the exact re-distribution and re-use policies for a specific project explicit. I will get back to this point.

Blockchain: A blockchain is, in simple terms, a ledger or a database. Now look at the image below: this is how Wikipedia works; everyone updates a master copy on a central server.

A blockchain records information not on a central server but in a distributed form; simply put, there is no "master copy" of the data in this database. This allows many people to write to it, and a community of users can update and amend the database. Note that this is not a new technology, but a combination of existing ones: trust-enhancing cryptographic tools and a P2P (peer-to-peer, i.e. distributed) network, which together ease authentication and authorization. The following diagram might make it clearer.
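To make the "ledger of linked records" idea concrete, here is a minimal, illustrative Python sketch of blocks chained by cryptographic hashes. It is a toy model of the core data structure only, not of how any production blockchain (with consensus and P2P networking) actually works:

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash a block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(data, prev_hash):
    # Each block stores its data plus the hash of the previous block,
    # so tampering with any earlier block breaks the whole chain.
    return {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}

chain = [new_block("genesis", "0" * 64)]
chain.append(new_block("Alice licenses dataset X", block_hash(chain[-1])))
chain.append(new_block("Bob re-uses dataset X", block_hash(chain[-1])))

# Verify the chain: every block must point at the hash of its predecessor.
for prev, cur in zip(chain, chain[1:]):
    assert cur["prev_hash"] == block_hash(prev)
print("chain intact:", len(chain), "blocks")
```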

There is one more wonderful feature of blockchains, introduced by Ethereum: the smart contract. A smart contract makes users stick to "contracts" (regulations and obligations) while using a blockchain platform. It not only defines the traditional contract but also enforces it, because it is coded on the blockchain. For example, A can transfer money to B on a blockchain platform (or vice versa) only if they fulfil the conditions of that platform's smart contract (a condition can be as simple as "A's balance > 0"). These contracts might need approval from many parties or just the two people in a transaction, and they also store records of the transactions (though members remain pseudonymous).
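As a rough illustration, here is what the enforcement logic of such a transfer contract might look like, sketched in Python rather than an on-chain language like Solidity; the addresses, balances, and the balance condition are invented for the example:

```python
class TransferContract:
    """Toy model of a smart contract that enforces a transfer condition."""

    def __init__(self, balances):
        self.balances = dict(balances)  # address -> funds
        self.ledger = []                # append-only record of transfers

    def transfer(self, sender, receiver, amount):
        # The contract's condition: the sender must actually hold the funds.
        if amount <= 0 or self.balances.get(sender, 0) < amount:
            raise ValueError("contract condition not met: insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        # The transaction is recorded, but only by address, not identity.
        self.ledger.append((sender, receiver, amount))

contract = TransferContract({"0xA": 10, "0xB": 0})
contract.transfer("0xA", "0xB", 4)     # allowed: condition holds
# contract.transfer("0xB", "0xA", 99)  # would raise: condition violated
```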

So, getting back to the point:

Why do we need the three things together: blockchain, open data, and licensing? As mentioned above, some open data licensing methods already exist. But the open data licenses available today are limited by their generality. What I mean is: if I have opened my data, say a YouTube video, under a Creative Commons license, it is very difficult for me to say that these specific 30 seconds are not under the open license. Why should I be able to? (a) The creator should have that liberty, because in the end it is her product. (b) There might be personal information, or social information valid only in one context of time and space, which should be protected.

This is where the beauty of blockchain comes in: smart contracts can be attached by the creator so that she can open parts, or the whole, of her creation, whenever and wherever required. Basically, it gives the creator ultimate power to control her own information while opening her creation at the same time. Not only that, it can help prevent misuse of the data. For example, medical data can be opened under terms that forbid its use for wide-scale animal testing.
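Below is a minimal sketch of what such a fine-grained license contract could check, again in plain Python as a stand-in for on-chain code. The 30-second exclusion mirrors the YouTube example above, and all names (FineGrainedLicense, exclude, may_reuse) are hypothetical:

```python
class FineGrainedLicense:
    """Toy license: the whole work is open except creator-excluded spans."""

    def __init__(self, creator, duration_s):
        self.creator = creator
        self.duration_s = duration_s
        self.closed_spans = []  # list of (start_s, end_s) kept non-open

    def exclude(self, who, start_s, end_s):
        # Only the creator may amend the license terms.
        if who != self.creator:
            raise PermissionError("only the creator can change the terms")
        self.closed_spans.append((start_s, end_s))

    def may_reuse(self, start_s, end_s):
        # Re-use is allowed only if the span avoids every excluded range.
        return all(end_s <= s or start_s >= e for s, e in self.closed_spans)

video = FineGrainedLicense(creator="alice", duration_s=300)
video.exclude("alice", 120, 150)   # these 30 seconds stay closed
print(video.may_reuse(0, 60))      # True: fully open part
print(video.may_reuse(100, 160))   # False: overlaps the closed span
```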

The second big issue in open data licensing is the lack of consistency across platforms. On many platforms, the open data licensing is loaded with so many regulations just to open data that people give up on sharing, or share without following all the regulations. The result is that open data is so inconsistent across platforms that users of the data are often lost on how to even start using it.

This lack of consistency can again be solved with a more global platform: clearly articulated contracts for sharing data, agreed upon by consensus (see how consensus works in blockchain) between the creators and users of different types of data. Obviously, this would need specific platforms for different types of projects and data types, but it would be much easier to materialise on a blockchain platform thanks to hard-coded rules of sharing. If you don't follow them, you simply cannot share.

Getting more technical: the idea looks good, but how would it be realised? Like any other platform, it would need a front end and a back end. This is the basic setting of a decentralized application (DApp). DApps usually use the same front end as any other application (HTML/CSS/JS), while the back end runs on Ethereum (Solidity is the most common language for that), with web3.js as the link between the two. Basically, the front end lets the creator update the variables in the smart contracts, and the back end runs these smart contracts.
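As a sketch of that link layer, here is roughly how the bridge call could look using web3.py (the Python counterpart of the web3.js library mentioned above). The contract address, the ABI, and the setClosedSpan function are placeholders for whatever the real license contract would expose:

```python
from web3 import Web3

# Connect to a local Ethereum node (e.g. a development chain).
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

# Address and ABI of the deployed license contract; both are placeholders
# here and would come from the Solidity compiler's build artifacts.
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"
CONTRACT_ABI = []  # fill in with the ABI JSON of the compiled contract

license_contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

# The front end would trigger a call like this when the creator marks
# seconds 120-150 of her video as not openly licensed.
# `setClosedSpan` is a hypothetical contract function.
tx_hash = license_contract.functions.setClosedSpan(120, 150).transact(
    {"from": w3.eth.accounts[0]}
)
w3.eth.wait_for_transaction_receipt(tx_hash)
```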

Possible downsides and prevention: a rational thinker would ask here: isn't it a problem to give so much "restriction freedom" to the creator, since we are then actually restricting openness? No, we are not. Researchers on open data (see Janssen and Zuiderwijk's work) have shown that even when people are ready to open up their data for purely altruistic reasons, they hold back out of fear of misuse or a lack of control over how the data is shared. So this would, in fact, encourage more researchers to come forward and open their data.

Then, is this enough to prevent some creators from sharing too little while still passing it off as open data? Of course not. But the point of this proposal is to make it easier and more flexible for people to open more data. If some people still decide to jump on the bandwagon and contribute almost nothing, we might need another smart contract to prevent that from happening? ;)

Another downside might be that if the data-sharing platform has too many regulations or smart contracts attached to opening data, some people might give up before they even share. But this can be tuned by trial and error, and opened up to ongoing discussion via forums with communities of creators and users. That is another beauty of smart contracts: they can be amended based on community consensus.

Further discussions/notes:

A comment: "Since blockchain has to do with a new medium for exchange (including commodities), and it is a new kind of "universal exchange mean" which can substitute money, the economic aspects of sustainable and human development need to be explicitly articulated"

So, is blockchain only a universal means of exchange? It is much more. That definition gets attached to blockchain because of the hype around bitcoin as a currency. Blockchain is also a very important database tool.

As for the economic aspects of blockchain as a substitute for money: it has considerable benefits for sustainable and human development, because it is not a currency governed by a central bank or by the mood of whoever happens to head the IMF, a nation, or a private bank. It is governed by actual trade and actual asset values in transactions between people, making it more decentralised and thus more sustainable.

So, what community values, interests, and stakes beyond the individual producer can such projects facilitate? Opening data, and easing the opening of data through such platforms, has multiple societal benefits; most of all, it allows literally anyone to open, share, and contribute data as, when, and where they wish. Above all, it allows the formation of a community whose members motivate each other to work towards a common goal, much like the community built around the launch of Linux.

Original blog here: https://goscommons.github.io/blog/2018/04/05/opendata-licensing-with-blockchain

Responsible Decentralisation

With the onset of decentralized systems from finance to utilities, one threat to these systems worries me: the rise of central agents who start directing the design of the decentralized systems themselves. Take the example of bitcoin, where the whales (the people who own large amounts of bitcoin) control the market. These whales are mostly early investors in bitcoin, and they no doubt deserve rewards proportional to their risks. But this also demands good regulation to ensure that small investors can still enter the market, unlike a controlled oligopoly where market entry is prohibitively costly.

There are other examples in the non-blockchain, non-crypto world where such decentralization changes have done the system much more harm than good, with accountability issues and an intensification of the central authority's (the regulator's) power.

Different regulations are needed to make sure that the regulator or the richer investors do not become another controlling authority over the system. The regulations I experimented with in my research thesis (mechanisms to prevent discrimination in energy trading) focused on empowering every user equally, irrespective of when they enter and what value they hold on the decentralized platform. Some of these regulations, which can be implemented directly on the platform, are:

Anonymize the data not just by user identity but also by how many assets each user holds on the platform.

Split everyone's assets into chunks equal to the smallest asset value on the platform. This is something I call a bid split: if one user owns 80 USD of value and the poorest owns 0.1 USD, the first user is identified as 800 different users and the poorest as one (see the sketch at the end of this list).

Opposite to the above regulation, there can be bid aggregation: allowing a number of small users to come together and appear as one user (with a threshold on the aggregate size). This is similar to the idea of a cooperative.

Allow different types of goods to be traded on the exchange.

Give peer-to-peer contracts higher priority than the platform's own contracts. This increases trust in the system (though lobbying can be a side effect here too).
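Here is a minimal sketch of the bid split and bid aggregation ideas, assuming a simple list of (user, asset value) pairs; the values and the threshold are illustrative:

```python
def bid_split(bids):
    """Split every bid into chunks equal to the smallest bid value,
    so large holders appear as many unit-sized anonymous bidders."""
    unit = min(value for _, value in bids)
    split = []
    for user, value in bids:
        chunks = round(value / unit)  # round to avoid float artifacts
        split.extend((user, unit) for _ in range(chunks))
    return unit, split

def bid_aggregate(small_bids, threshold):
    """Pool small users into one cooperative-style bid, refusing
    pools that would exceed the allowed threshold."""
    total = sum(value for _, value in small_bids)
    if total > threshold:
        raise ValueError("aggregate exceeds the allowed threshold")
    members = "+".join(user for user, _ in small_bids)
    return ("coop:" + members, total)

bids = [("rich", 80.0), ("poor", 0.1)]
unit, split = bid_split(bids)
print(unit, len(split))  # 0.1, 801 unit-sized bids in total

coop = bid_aggregate([("a", 0.1), ("b", 0.2), ("c", 0.3)], threshold=1.0)
print(coop)  # ('coop:a+b+c', ~0.6)
```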

Redefining recommendation algorithms

The majority of the information we get about the world, and of the choices we make today, comes from the web pages we browse, be it social media, news, blogs, or e-commerce websites. But the choices we are offered, personally recommended based on our history and on similar users, constitute a very small fraction of all the choices available.

Source: TED talk by Eli Pariser

These recommendations surely reduce the effort of wading through millions of choices, but they also keep the user from looking beyond the domains of interest he or she has always chosen. For example, if a user prefers candidate X in an election and reads only news in X's favor, the user will usually be recommended only news about this candidate (mostly favorable, due to the content-based recommendations used). This traps the user in a “bubble” of information, the filter bubble, which harms well-informed decisions, e.g. during voting. This is just one of many instances of information bias caused by recommendations.

Today’s solutions to this focus on user behavior and on how users need to be more aware. The TED talk by Eli Pariser also highlights what we, as users, can do to prevent the information bubble. As much as I respect this, I feel there is a need to go beyond it: the amount of information will keep increasing, so users will lose control over what they are exposed to, making the bubbles stronger. Thus, there is a need for a solution aimed at the algorithms themselves, one that tries to stop the bubble from forming in the first place.

Where today’s algorithms fall short and where they win

Let’s focus on the particular case of elections as an example. There are two basic types of algorithms used for news recommendations:

  1. Content-based: content-based algorithms generate recommendations from the existing choices or profile of the active user and from the content of the items.

  2. Collaborative filtering: collaborative filtering algorithms infer the preferences of an active user from the choices of many similar users, or from the active user’s own earlier choices.

The memory-based approach to collaborative filtering (CF) uses some similarity measure between users or items, such as Pearson correlation, cosine similarity, or nearest neighbors. Model-based algorithms use machine learning to find patterns in users’ browsing history and similar signals. There are also many successful hybrid models in news recommendation that tap into the benefits of both memory-based and model-based CF.
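For concreteness, here is a minimal sketch of memory-based CF with cosine similarity on a toy user-item matrix; the ratings and the scoring scheme are invented for illustration:

```python
import numpy as np

# Rows are users, columns are news items; entries are implicit ratings
# (e.g. clicks), with 0 meaning "not seen". Toy data for illustration.
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    # Cosine similarity between two rating vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def recommend(user, k=1):
    # Score unseen items by similarity-weighted ratings of other users.
    sims = np.array([cosine_sim(ratings[user], r) for r in ratings])
    sims[user] = 0.0                     # exclude the user themselves
    scores = sims @ ratings              # weighted sum over other users
    scores[ratings[user] > 0] = -np.inf  # only recommend unseen items
    return np.argsort(scores)[::-1][:k]

print(recommend(user=0))  # user 0 resembles user 1, so item 2 surfaces
```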

These algorithms have definitely proven beneficial for news recommendation, but they have also proven incapable of solving the information “bubble” problem.

Breaking it down technically: CF algorithms always end up giving item- or user-similarity-based recommendations, i.e. only a similarity index is ever considered. There is no study of the degree of (dis)similarity required to get a user out of the information “bubble”.

Similarly, for content-based recommendation algorithms, there is no discussion of whether a user should be made aware of different topics within a given domain: e.g. should he or she be shown different candidates (rather than the same one, which is the status quo) once it is known the user is interested in election news? This again demands understanding the degree to which any topic’s relevance to a user can be broken down.

Solution needed: a better algorithm architecture

The aim now is to define a framework that lets users out of their information bubble by giving them some dissimilar recommendations. Such a framework can be defined in three steps:

  • To understand the domain or topic of a given news article, a term-frequency structure would be used, with one addition: a structure or hierarchy among the extracted terms. E.g. in this hierarchy, the topic “election” is preceded by the topic “politics” and followed by the “names of the candidates”. The name or location specific to the article, identified as the topic at its level, can be taken as the lowest level of the hierarchy. This demands a reading of the literature on ontologies, especially hierarchy ontologies in context-aware recommendation. The method also applies to CF algorithms, where similarity between users would again be defined as a hierarchy; but since both the hierarchy and the user similarity would be content-based, a content-based hierarchy ontology should suffice. A suitable method here extracts the hierarchy from “adjacency of terms or syntactical relationships between 2 terms, which are two properties that yield considerable descriptive power to induce the semantic hierarchy of concepts related to these terms.”

  • Next, it must be defined up to which level of the hierarchy similarity should be maintained. The level should be chosen so as to break the information bubble while still keeping the user’s interest in the topic. E.g. a user interested in elections should not be recommended news of the latest football match, but can be recommended information on the latest government health-care policies (because of the interest in politics). Traditional similarity indexes can be used up to the desired level, after which random or weighted recommendations can be made, constrained by these “extended user preferences”.

  • Both steps above demand an understanding of the article’s content, which starts with measuring the weights of the terms in the article.
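A rough sketch of how such a hierarchy ontology could be represented, with the levels and parent links invented to mirror the election example developed below:

```python
# Toy hierarchy ontology: each entity has a level and a parent, so a
# recommender can walk up to a chosen level and back down again.
ONTOLOGY = {
    # entity:          (level, parent)
    "politics":        (1, None),
    "elections":       (2, "politics"),
    "voting day":      (3, "elections"),
    "donald trump":    (4, "voting day"),
    "hillary clinton": (4, "voting day"),
    "sports":          (1, None),
    "football":        (2, "sports"),
}

def ancestor_at_level(entity, level):
    """Walk up the hierarchy until the requested level is reached."""
    node = entity
    while node is not None and ONTOLOGY[node][0] > level:
        node = ONTOLOGY[node][1]
    return node

def siblings(entity):
    """Entities sharing the same parent, e.g. rival candidates."""
    parent = ONTOLOGY[entity][1]
    return [e for e, (_, p) in ONTOLOGY.items() if p == parent and e != entity]

print(ancestor_at_level("donald trump", 3))  # voting day
print(siblings("donald trump"))              # ['hillary clinton']
```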

Demonstration with an example article

An inquiry into “fake news” is set to be launched by an influential cross-party committee of MPs within months amid fears the phenomenon is undermining democracy. Executives at Facebook, Google and Twitter are expected to be called into Parliament and grilled on whether they are doing enough to stop the trend. The Commons Culture Committee is discussing launching the inquiry internally and hopes it can begin holding sessions by late spring or early summer. Damian Collins, the Tory chairman of the committee, told the Telegraph he fears “malicious” fake news is especially damaging around elections.

The results on the article, as delivered by the same website, showed the top results FACEBOOK, NEWSPAPERS, US ELECTION, SOCIAL MEDIA, GOOGLE, DONALD TRUMP, TWITTER, FAKE NEWS, INTERNET, in descending order of tf∗idf weight (a word’s frequency in the article weighted by how rare it is across all documents). These words can be called the “keywords” of the article.
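The keyword-extraction step can be sketched with scikit-learn’s TfidfVectorizer; the shortened article text and the “background” document here are stand-ins so that idf has something to contrast against:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

article = (
    "An inquiry into fake news is set to be launched amid fears the "
    "phenomenon is undermining democracy. Executives at Facebook, "
    "Google and Twitter are expected to be called into Parliament."
)
# A background document so idf can down-weight generic words.
background = "The committee is expected to be launched within months."

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([article, background])

# Rank the article's terms by their tf-idf weight (row 0 = article).
weights = tfidf.toarray()[0]
terms = vectorizer.get_feature_names_out()
keywords = sorted(zip(weights, terms), reverse=True)[:5]
for w, t in keywords:
    print(f"{t}: {w:.2f}")
```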

Now that the context is extracted for the given article, the hierarchical relations between these words must be defined. There are different semantic relations between words, and they can be given different weights (here the weights were decided by manually analyzing the effect of propagation in different cases for each relation). With this in mind, we get the relations shown in the figure below, an example of the type of hierarchy we can set up for news articles.

General hierarchies in news articles (example of politics and sports shown here)

Based on news article structures in general, a hierarchy is designed as shown in the figure below.

Domain ontology based on the keywords used in the case article

As can be seen in the figure above, all the levels, irrespective of domain, follow almost the same relations from one level to the next. At level 1 the users’ choices are pretty firm, e.g. a person completely into sports would not enjoy news from politics. As we descend the levels, the choices become more flexible, since users are interested in the madeOf relations. For example, a person interested in “elections” might also want a progress report on current health-care “policies”, though there may be a choice along an instanceOf relation; in this case “region” for “politics”. At level 3, people are very flexible in their choices, with more similarTo relations within the level. For example, a person planning to vote would mostly want news on all the “events” happening in the election. But at level 4 (linked by the contains relation to level 3), users are driven by their personal preferences: if they support one player in a game, or one candidate in an election, they want to follow news for only that player or candidate. This is what produces the information bubble under the existing algorithms.

Based on the above discussion, the solution is to first identify the keywords for a given article and place them in the hierarchical domain ontology. Then, based on the entity at level 4 (Donald Trump), the element related to it by the contains relation at level 3 (Voting day) is chosen. All (or a random subset of) articles whose keywords match that level-3 entity (Voting day) are recommended, along with those whose keywords are other level-4 entities under the same level-3 parent (Hillary Clinton).
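A minimal, self-contained sketch of that de-biasing step, with the contains relation and the article pool invented to match the example:

```python
# Toy level-3 / level-4 structure from the figure: a level-3 event
# "contains" the level-4 entities; names mirror the example above.
CONTAINS = {"voting day": ["donald trump", "hillary clinton"]}

def debias_recommend(keyword, articles):
    """Given a level-4 interest, recommend articles tagged with its
    level-3 parent or with any sibling level-4 entity."""
    parent = next(p for p, kids in CONTAINS.items() if keyword in kids)
    allowed = {parent, *CONTAINS[parent]}
    return [title for title, tags in articles if tags & allowed]

# Hypothetical article pool: (title, set of ontology keywords).
articles = [
    ("Trump rally draws crowds", {"donald trump"}),
    ("Clinton unveils policy plan", {"hillary clinton"}),
    ("Voting day logistics explained", {"voting day"}),
    ("Football transfer news", {"football"}),
]

print(debias_recommend("donald trump", articles))
# Clinton and voting-day articles surface; football stays filtered out.
```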

In summary, this framework suggests how existing content-based recommendation algorithms could remove information bias by using a hierarchical domain ontology as shown above: first fit the keywords into the ontology, and then, at the final level, instead of respecting only the similar domain, recommend the articles for all (or a random few) of the entities.

———

This is another of my attempts (rather, a thought experiment) to put policy design in coding and algorithm design on stage, using the case of social media. To experts in the field of algorithms: there may be flaws in the discussion here, and I genuinely welcome discussion around it.

A take on the biohacking culture

Technology has always been seen as a tool for making human life easier and more efficient. Though there are many other perspectives on technology, the core purpose of building tools and applying scientific knowledge to increase efficiency has remained the same. In other words, we create technologies that enhance human tasks, which in turn enable us to create even more efficient technologies.

This results in a cyclic development process (a feedback loop). Within this human-technology feedback loop, synthetic biology has been emerging since the 1960s; it sits at the intersection of biology and engineering and involves disciplines like genetic engineering, biotechnology, and evolutionary biology. It has been defined as “designing and constructing biological modules, biological systems, and biological machines for useful purposes” (Nakano, Eckford, & Haraguchi, 2013). With time, this definition has evolved (and been debated) and expanded to include “artificial design and engineering of biological systems and living organisms for purposes of improving applications for industry or biological research” (Osbourn, O'Maille, Rosser, & Lindsey, 2012).

The modern developments in synthetic biology, including the artificial design of biological systems, combined with developments in electronics, have given rise to a new culture in the world of synthetic biology called biohacking, or DIY (do-it-yourself) biology. Biohacking is a culture of exploring biology in which all sorts of people (professionals and non-professionals who are simply enthusiastic about biology or experiments on the human body) get together in small labs, mostly outside universities. They aim to apply IT-style hacks to biological systems, mostly human bodies (Spencer, 2014). This could mean implanting an NFC (Near Field Communication) chip in your hand so you can pay with the hand itself, or modelling tumor DNA and analyzing the data with software to understand whether a drug or treatment works.

There is no formal definition of biohacking yet, as it is an emerging culture within synthetic biology, but various activities have been categorized under it. Dave Asprey, originally a computer security expert who now considers himself a biohacker, describes two approaches in this culture: either you work on biology outside yourself (like making an amoeba glow), or you hack your own biology to gain access to, and control of, your body that you usually would not have. The biohacking culture thrives on its openness: there is no peer review or publication in scientific journals; knowledge is shared via blogs and open discussion forums (Bousted, 2008).

There have been strong proponents and opponents of the very existence of the biohacking culture. Among the proponents, Ron Shigeta, who runs Berkeley Bio-labs (a biohacking site in Berkeley) where many aspiring biologists gather to hack, defines biohacking as “a freedom to explore biology, kind of like you would explore good fiction.” He believes this freedom helps biohackers follow their own curiosity and get to the bottom of the things they want to understand. Drew Endy, professor of bioengineering at Stanford and himself a biohacker, believes hacking culture in general to be positive, aiming at learning by building and studying the results. Opponents have labelled biohacking “criminal activity”, stressing its disregard for ethical considerations. They fear potential safety risks, such as the release of lethal microorganisms into the environment due to a small experimental mistake (Marsh, 2015).

Biohackers have made great contributions to the medical field thanks to an entrepreneurial component that research labs usually lack. Examples include the coffee products developed by Dave Asprey (founder of Bulletproof), which help a person achieve high-level performance. Another startup by biohackers has helped develop genetically modified food, using concepts like gene silencing to prevent food from rotting when exposed to air (Venkatraman, 2013). Biohackers have also worked on security applications, such as identifying and authenticating the user in a transaction; this has the potential to replace reviled passwords (Mimoso, 2015). Another way biohacking differs from professional synthetic biology laboratories is its extent of information availability and openness, which gives it an advantage over those setups. For example, low-cost thermocyclers such as OpenPCR were created to make such technologies available to the public, and anyone can contribute to their design, making research very cheap. Biohacking culture also promotes a community learning environment, in labs and online, which helps researchers learn faster.

Thus, completely opposing biohacking because of potential safety risks would disrupt innovations that contribute to the development of humankind. But the potential risks, which might even lead to a global catastrophe if unregulated (like biohackers running amok with DNA modification and developing programmable biological agents usable in biological warfare), remain. So the question arises: should biohacking stay in its present state, or should it be allowed to exist only under firm regulation? The common answer would be the midway stand that there should be a balance of both, but I propose that firm regulations need to be in place for any form of biohacking activity to flourish. To be clear, putting biohacking under strict regulation does not mean banning it, because biohacking has aspects that persist even with regulations in place, like its open community learning culture (which is not found in professional labs, where people are formally equipped with information and generally do not need such platforms). The following discussion of the scope of regulations, their consequences, and the arguments over both a regulated and an unregulated field of biohacking explains my stand.

Regulations and their benefits

Before diving deeper into why there should be a regulated environment for biohacking, we should understand what defines such an environment in the field of synthetic biology in general. Hoffman et al. (2012) define the “principles necessary for the effective assessment and oversight of the emerging field of synthetic biology” as: precautionary principles; mandatory synthetic-biology-specific regulations; responsibility towards the environment; public and worker safety; corporate accountability; and the right to information. The regulations can be divided into five clusters for easier understanding (CDC, 2009; CDC, 2013; OSHA, 2011):

1) Regulatory bodies:

Assignment of a regulatory body: regulation of bio-labs at both federal and state level, such that the government has the “ability to revoke the use of select agents, levy fines in the form of civil money penalties and imprison people who are not registered to possess select agents but do so anyway, or a registered person who transfers a select agent to an unregistered person.”

2) Laboratory and environmental safety:

Regular amendment of the biosafety protocols (defining biosafety levels, BSLs) and of lab safety levels based on the level of contamination present during experiments. Once defined, these safety levels have to be implemented in the labs to prevent any form of leakage. All forms of protective equipment should be available in the laboratory.

3) Procedural safety:

Defining standard microbiological practices: (a) personal, like washing hands and the complete prohibition of eating, drinking, and smoking in laboratories; (b) procedural, like minimizing splashes and aerosols and decontaminating surfaces; and (c) prerequisite checks, like verifying that all the safety provisions of the lab are working.

4) Personnel safety:

Appropriate training of the performers and volunteers of an experiment (trained staff), and complete checks of their immunizations and infections before experimentation.

5) Information safety:

Clear regulations about which information is shareable (e.g. experiment results) and which should be kept hidden or protected (e.g. volunteers’ personal details).

Biohacking is inherently a biological pursuit involving experimentation on living beings, and any work on a living organism needs to be done with the utmost care to prevent harm to that being’s health. This demands clear regulations, such as the volunteer and experimenter safety (personnel safety) rules above, to prevent harm to the people involved. Biohacking experiments can be performed by experimenters on themselves, in which case there is a high chance the experimenter will not look after his or her own safety; such regulations would help prevent accidents.

It should also be understood that one inherent threat of biohacking is the inability to “undo” a hack. “….when a hacker causes the digital reality in their computer to malfunction through tinkering, they can simply reboot and start again. It might not be so simple when hacking biology itself. This may be a flawed analogy, but it is probably something the new socio-ethics of syn-bio should address if serious mis-steps are to be avoided” (Maynard, 2008). In such an environment, if there is no button (regulation) controlling the start of the process when the hack is risky, the results can be devastating for not just one but possibly many lives. Regulations like procedural prerequisite checks (wearing gloves before experiments, decontaminating surfaces afterwards) and laboratory safety level checks help prevent such mistakes.

As this culture has only emerged in the last decade, there has not been enough awareness at the jurisdictional level, and the informal, dispersed research structure has so far made regulation impossible to set up. It has also been asked, pointedly: “How do you establish a framework for socially and ethically responsible development when the person you need to reach is an adolescent teenager constructing new biological code in their basement” (Maynard, 2008)? Despite these practical issues, this is exactly why there is an urgent need to start organizing the research setup and put a formal process in place in this domain. If a teenager accidentally creates a biological replication code while working with harmful bacteria, whom do we hold responsible for the resulting catastrophe? Regulations such as allocating regulatory bodies (governments, independent organizations, etc.) assign these responsibilities to specific bodies, preventing such issues. For example, a state government or a locally formed community organization overseeing the community’s activity could take up this responsibility; but that is only possible if some regulatory structure exists to appoint such a community-run body in the first place.

At a larger (e.g. national) level, these regulations cannot completely eliminate the threats of bioterrorism, or of accessing someone’s body data without consent (say, implanting an RFID, radio frequency identification, tag in a person to track their location), but they can certainly help reduce them; for example, the fear of arrests or fines may put a significant check on such activities.

These regulations also help standardize the field (through the introduction of standard rules) and thus give the biohacking culture a stable form. This homogeneity across biohacking experiments makes it easier for researchers in similar fields to build on and complement one another. Introducing regulations also makes the field less risk-prone, so young minds start seeing it as a more professional career; this increases their involvement and their contributions, which in turn leads to breakthroughs and attracts yet more students. This is a strong feedback loop that only a regulated environment can ensure. A regulated environment also builds more trust among the customers of biohacking firms.

A regulated research environment also emphasizes the need for formal education when conducting research where a single mistake could cost lives. Such decisions can only be handled responsibly by formally trained people. Formal education is built on a framework and questions the very requirement of a particular bio-hack, making the subsequent decisions well grounded.

One argument against training-related regulation, usually made by DIY biologists, is that formal education is not always required for learning, and anyone willing to learn should be able to explore biology. It is true that there have been exceptional minds who were dropouts and still delivered some of the world’s greatest innovations. But the majority of innovations and developments in science come from formally educated students. Above all, formal education lends credibility that what is being done has the least chance of going wrong (because the education system trains you for it).

Challenging these benefits of a controlled research environment, one of the most pressing criticisms targets the existence of regulation itself, which is claimed to prohibit the freedom to work as one wishes. Research prospers on freedom, so it seems ironic to ask for regulation of research. But we should understand that the same freedom, exercised by a bioterrorist and by a person passionate about helping humankind through biohacking, can lead to two completely different outcomes. If we cannot tell who among us is the former, a system of regulation to bound that freedom and minimize the risk of a bioterror attack needs to be in place. It should also be kept in mind that such research regulations aim to enable and foster the legitimate approaches that help humankind, rather than merely to restrict anyone.

Resistance to regulations

The proponents of the biohacking freedom culture (or rather, the opponents of regulating biohacking) strongly resist the introduction of any form of regulation to this environment. This is usually because they are pure preachers of freedom. What they see as the benefit of biohacking is not the benefit of the culture itself, but the characteristic property of freedom, which feels more comfortable than any form of restriction and promises faster, more immediate results.

Proponents of biohacking usually offer an analogy: the garage culture (a culture of experimenting without anyone’s consent, usually in garages, hence the name) in Silicon Valley produced some of the best innovations in today’s world, like Google and Apple. This free-spirited innovation culture yields wild and unexpected innovations, some of which turn out to be real breakthroughs for humankind. The claim is therefore that limiting the progress of biotechnology in the hacker community with regulations may slow the pace of such innovations and thus hold back the development of humankind. But this is an out-of-place analogy, because if we look into the history of life-saving drugs or biological and medical innovations, we will not find a single one that came from such a culture. The reason is not that the biohacking culture did not exist back then, but that such innovations require a physical environment and professional experience that the biohacking (garage) culture lacks.

The partially in-principle claims of these opponents of regulation also point to freedom. For example, Drew Endy (professor at Stanford and a biohacker) states that this hacking culture helps you learn about what you are curious about and implement it. The claim is that, with no professionalism or regulation, it is easier to learn and build what interests you than it would be in a regulated professional environment. This, again, speaks to the benefits of freedom, not to any benefit the unregulated environment brings to biohacking itself. None of these claims shows that only an unregulated environment can deliver biohacking’s benefits and that a regulated one cannot. Most importantly, the myth, or fear, in the minds of regulation’s opponents, that regulation will completely shut down the biohacking culture, needs to be dispelled: the benefits of biohacking can still be exercised for the development of humankind even with these regulations in place.

Conclusion (is it?)

The advancements in synthetic biology related to the artificial design of biology for industrial and medical purposes have given birth to a new culture of “biohacking”, which combines entrepreneurship with synthetic biology and operates on the principle of complete openness of research with no regulations or controls. Biohacking has huge potential for enhancing human life: modifying genes to cure illness, easing the gathering of data on different treatments, producing genetically modified foods, making humans more resistant to bacterial illnesses, and so on. Beyond the direct medical benefits, biohacking’s entrepreneurial activities bring various security benefits too. But the openness of this culture poses safety risks, and to prevent the potentially catastrophic consequences of non-regulation, strict regulation needs to be in place before any further activity takes place in biohacking. Since this regulatory structure cannot be applied only to the few who might use biohacking as a weapon, as in bioterrorism, the formal structure needs to apply to all, and there can be no relaxation of such a system.

This essay has shown the various benefits of applying synthetic biology regulations to biohacking in five main clusters of safety regulation: procedural; laboratory and environmental; personnel; regulatory bodies; and informational. There are inherent issues that make biohacking risky without regulation, and there are further issues caused by proponents of the biohacking culture being adamantly against any regulation at all. The latter usually stems from these proponents’ firm belief in freedom alone, which overshadows the benefits of biohacking that can be realized even in a regulated environment. Above all, it should be kept in mind that regulations do not remove biohacking’s benefits, such as its community culture of sharing information (compare the Stack Overflow platform, where coders help each other solve bugs), which remains part of biohacking even under regulation. So there is no need to ban the biohacking culture itself, only to regulate the aspects of it that exercise complete freedom in EVERY form. To conclude, biohacking urgently needs to shift towards a regulated environment before activities in this field are promoted any further.

About the boom of tech start-ups in developing countries: does it mean development for them?

Almost every youngster studying at a tech college has an idea he or she thinks can be grown into a business. There are a number of reasons why everyone has a unique idea and why, in a developing country today, it has a good chance of becoming a successful business.

1) The major reason for the boom is everyone's increasing dependence on online services. After a tiring day, everyone expects to lie down at home with ready-made food in hand. Weekends feel good when you do not need to go to a busy shopping mall because the services come to you instead. So it is quite clear what kinds of services are appearing as startups, and why they are in such demand.

2) Whenever Amazon gives you a better offer than Flipkart, you obviously choose to buy the same product from Amazon. This is how more competitors arise in the same market. From my college alone, I saw four apps built in the same year just to provide online food delivery. All of them are getting good venture support because they court the same customers with deals the others miss. So it is not just about the product; it is also about customer service and marketing strategy, whose scope has broadened today.

3) The customer segment has broadened a lot, since online services have made services cheaper. Even a lower-middle-class family running on a tight budget can look forward to buying daily necessities from, say, Snapdeal. This has also pushed producers to diversify their products for different communities; for example, HUL sells the same product in different packaging to serve different needs.

4) Developing countries like India have always managed to make the best of a situation through "jugaad", which in plain words means the rudimentary innovation necessary to make things work when the best facilities are unavailable. Ideas like reverse innovation in medical technology are one of its consequences, yielding low-cost technologies with the same, and maybe even better, results. This has helped many small-scale enterprises rise in rural and urban-poor areas and develop into renowned projects across the country.

The bigger question: does this help the bigger picture, i.e. the development of the country as a whole? This question struck me as I had to make some decisions about my own life, and I stumbled upon some interesting articles and studies. Here are some conclusions based on them and on different work experiences:

1) When I worked as a summer intern at the Indian arm of one of the world's biggest FMCG firms, it struck me that the technical expertise hired was used only for managing projects and locally implementing things already carried out by someone sitting in a developed country. This was not innovation, and it was not worthy of the skills of the brightest students from the best technical colleges. Luckily, I got the chance to do something genuinely innovative, and I have always been thankful to the firm for that.

I think this is exactly what is happening with the trending online startups of India. We are copying something prevailing in Western countries and setting it up locally. That is good for consumers, and yes, it improves employment, but it does not mean we have innovated something new, and so it keeps us from being leaders. I have seen that investors lose interest when you claim to be the only one in the market selling a given service or product, which discourages startups further.

This maintains the gap that has always existed between a developing country and a developed one, and it gives us no hope of overtaking developed countries in the future until we innovate and sell OUR OWN products and services.

2) Developing countries need support at the base of the pyramid first. Surveys reveal that a huge share of our population struggles below the poverty line (21-29%), and many more live below livable conditions. This is sad, and it needs support. When only 19.19% of Indians can access the internet, it does not make sense to cater only to that slice; we need to address more pressing causes, like poverty and education, first. Some organisations are doing great work here, like Avanti Learning Centres, and their efforts deserve appreciation. But more such organisations need to rise and support these ground-level causes.

3) The online startups, except for a few like SocialCops, are just providing luxury to a privileged class. Don't take me wrong: I support them, and I know even my own life would have been less easy and less awesome without them. But I think we have more important causes to support than mere luxury options, say cheap medical services. Better apps for public transport, like m-Indicator, are equally needed, and efforts are required in that direction.

Summarising: the growth in startups helps build our economy and promises the rise of leaders in the country. But it is still not directed the way it needs to be for the best development outcomes.

So, if you are thinking about starting up, give a thought to all of this once. Just once. :)