Bookafy Security and Privacy

Use Bookafy with confidence and join the over 15,000 businesses around the world who trust Bookafy.

Connected Calendars

When connected to a third-party application (iCloud, Google Calendar, Outlook, Exchange), Bookafy imports only the calendar event's subject line, date, time, and duration in order to block that time in Bookafy and prevent double bookings. We do not import, store, or keep any personal or identifiable information.


Email and Contacts

Bookafy does not access any information within your connected calendar or email account, including contacts, email addresses, or email messages. Email addresses can be used to authenticate account ownership within Bookafy, but we do not collect any other personal data.



All third-party integrations are done via OAuth authentication. This allows Bookafy to connect with third-party providers without seeing, collecting, or storing your usernames or passwords. Bookafy is connected via an authentication code that is provided as you connect via OAuth.


Data Hosting


Bookafy is hosted on Azure. You can read about Azure's and AWS's thorough security provisions on their respective sites.

Bookafy leverages all of the platform’s built-in security, privacy and redundancy features. Azure continually monitors its data centers for risk and undergoes assessments to ensure compliance with industry standards. Azure’s data center operations have been accredited under: ISO 27001, SOC 1 and SOC 2/SSAE 16/ISAE 3402 (Previously SAS 70 Type II), PCI Level 1, FISMA Moderate and Sarbanes-Oxley (SOX).



Bookafy utilizes the AWS CDN for images and leverages all of the platform's built-in security, privacy and redundancy features. AWS continually monitors its data centers for risk and undergoes assessments to ensure compliance with industry standards. AWS's data center operations have been accredited under: ISO 27001, SOC 1 and SOC 2/SSAE 16/ISAE 3402 (Previously SAS 70 Type II), PCI Level 1, FISMA Moderate and Sarbanes-Oxley (SOX).



Bookafy runs backups of all data and its code base daily on redundant servers in two separate geographic regions. Code and data backups are also hosted on Dropbox Cloud Storage.



Data that passes through Bookafy is encrypted, both in transit and at rest. All connections from the browser to the Bookafy platform are encrypted in transit using TLS SHA-256 with RSA Encryption. Bookafy requires HTTPS for all services.

For sensitive data where the original values are not needed, such as our own passwords, we hash the data using the BCrypt algorithm. Where the original values are needed, such as authentication details for accessing calendars, the values are encrypted using the AES-256-GCM algorithm using a unique, randomly generated salt for each set of sensitive data.
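The salted, slow-hash pattern described above can be sketched as follows. Bookafy states it uses BCrypt; since bcrypt requires a third-party package, this illustration uses Python's standard-library PBKDF2 (`hashlib.pbkdf2_hmac`) as a stand-in for the same idea: a unique random salt per secret plus a deliberately expensive hash, so stored values cannot be reversed or compared across accounts.

```python
import hashlib
import hmac
import os

# Illustrative sketch only -- Bookafy's stated algorithm is BCrypt;
# PBKDF2 is used here because it ships with the standard library.

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique, randomly generated salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 200_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Because the salt is random per password, two users with the same password produce different stored digests.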

Secure transfer to servers

Bookafy employs a data security service to authenticate data transfers between our development team and the virtual machines. All data is encrypted and secured.

Data Sharing and Third Party Access

Bookafy doesn’t sell customer data to anyone. We do not share data for cross-channel marketing purposes. Bookafy does not grant access to any third-party provider except through an account connection via OAuth authentication or an API key. Both can be disconnected at any time from within Bookafy or from within the third-party application. Otherwise, no third party is given, sold, or shared data for any reason.


Background Checks

All Bookafy employees go through a thorough background check before hire.


While we retain a minimal amount of customer data and limit internal access on a need-to-know basis, all employees are trained on security and data handling to ensure that they uphold our strict commitment to the privacy and security of your data.


All employees sign a non-disclosure agreement and a confidentiality agreement before being hired.


Data Access

Only authorized employees are granted access to our production infrastructure. The use of password managers to ensure strong passwords, and two-factor authentication when available, is mandated across the company.



We have business continuity and disaster recovery plans in place that replicate our database and back up the data onto multiple cloud servers in different geographies and data centers to ensure high availability in the event of a disaster.


Bookafy has a historical uptime of 99.3%.

Development Cycles

New Features

Bookafy develops new features in 3-week sprints. Deployments begin on a development server, move to staging, and then to the live server. Live server deployment occurs on Sunday mornings (PST).

QA and Testing

Bookafy runs automated testing along with manual testing before each deployment. 

Dev and Staging Server QA

Before Bookafy is released on live servers, the code is deployed on staging and development servers during the QA process. Once the testing is complete, the code is added to a repository for live server deployment on the sprint cycle timeline. 

Live Monitoring

Once the code is released to our production server, our QA team runs automated tests and manual tests and utilizes external software to monitor our services. The external monitoring software runs 24/7, with alerts automatically sent to our development team when any issue arises. These alerts are monitored 24/7 and are sent via text message and email to our team.



Bookafy is hosted on Azure servers and utilizes Azure's Next Generation Firewall service, which sits behind Azure's Web Application Gateway service. This includes protection against threats such as SQL injection and malformed HTTP requests.

Malware and Virus Prevention

All of our employees work from company-owned machines running anti-malware and antivirus software. Our office server is protected by a firewall for protection against external penetration.


Our internal server, employee machines and data hosting continuously run vulnerability scanning software. 

Application Security

Login credential protection

For external applications that work with Bookafy, Bookafy does not store or collect passwords. All Bookafy authentication uses a secure OAuth connection, with a secure token granted for each individual user's account. Examples include Zoom, Stripe, Google Calendar, Exchange, Office 365, iCloud, Mailchimp, and more.


When an account is cancelled or downgraded to free, all OAuth connections from Bookafy to your third-party applications are automatically disconnected.

API Access

All access to data via Bookafy is explicitly approved through an OAuth authorization mechanism which grants access tokens that can be revoked at any time.
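The revocable-token model described above can be sketched with a minimal token store. The names here (TokenStore, issue, revoke) are hypothetical, not Bookafy's actual API; the point is that access is granted via an opaque token, and revoking it immediately cuts off third-party access without ever exposing a password.

```python
import secrets

# Hypothetical sketch of an OAuth-style access-token registry.
# Real OAuth flows involve an authorization server and scopes;
# this only illustrates issuance, validation, and revocation.

class TokenStore:
    def __init__(self) -> None:
        self._active: dict[str, str] = {}  # opaque token -> account id

    def issue(self, account_id: str) -> str:
        token = secrets.token_urlsafe(32)  # unguessable random token
        self._active[token] = account_id
        return token

    def is_valid(self, token: str) -> bool:
        return token in self._active

    def revoke(self, token: str) -> None:
        # Revocation is immediate: the token simply stops resolving.
        self._active.pop(token, None)
```

A disconnected integration maps directly onto `revoke`: the third party's token is invalidated, and no credential rotation is needed because no password was ever shared.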



We have incorporated GDPR standards into our data practices to make sure all of our customers are supported and in compliance with GDPR. Learn more about Bookafy GDPR.

Security FAQ 

Have any significant security breaches or incidents occurred in the past 5 years?


Are privileged and generic account access tightly controlled and reviewed on a periodic basis, at least annually?



What data is being collected about the user? 

The software collects data about the account owner and the end customer. Both sets of data are based on the data provided by the account owner and the end customer. We do not collect any data beyond what is volunteered by the end user or account owner.

The account owner can create text fields to collect different data points at booking, but the customer is fully aware of the data being collected, as the end user types the data into the fields themselves. The end user or account owner can request data deletion at any time by email.

For what purposes is the app using the data?

The app data is only used for the transaction the end customer has signed up for. If using a third-party app to log in with SSO, like Facebook or Google, we only use that connection for account access. We do not use data from the account, such as contacts, events, or email messages, and we do not post on your behalf or participate in any activity within your account. It is used only for login.

What are the users rights for data deletion and how can the user request to have the data deleted? 

We use the data collected from the account owner (our customer) to deliver a better experience to the account owner. This includes any place the account owner has gotten stuck, visited many times, might have questions about, or encountered a bug. We use this data to communicate with our customers (account owners) at the right time with the right message.

For the end user (our customer's customer), we use this data only for the sake of the transaction (booking) with the account owner. We do not market to these customers or use their data in any other way. Their data is not sold or lent out; it stays in our system.

In the case of both the account owner and the end customer, their data can be deleted at any time. The account owner can email us to request that their account and data be removed. The end customer can ask the account owner to have their data removed.

Are shared user accounts prohibited for employees? What about Clients?

Employees have their own dedicated accounts. Clients also have their own dedicated accounts, with access to their data only.

Does your password construction require multiple strength requirements, i.e. strong passwords and utilizes a random sequence of alpha, numeric and special characters?

We require a minimum of 6 characters for passwords at the basic password-management level. OWASP and NIST SP 800-63-3 password policy options may be available in the coming year.
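A stricter composition policy of the kind referenced (OWASP/NIST-style guidance) could be sketched as below. This is an illustrative validator, not Bookafy's current rule, which is a 6-character minimum; the thresholds and character classes here are assumptions for the example.

```python
import re

# Hypothetical password-strength check: minimum length plus one
# character from each of four classes (lower, upper, digit, special).

def is_strong(password: str, min_length: int = 8) -> bool:
    return (
        len(password) >= min_length
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )
```

Note that current NIST SP 800-63B guidance actually favors length and breach-list screening over forced composition rules, so a production policy may look different from this sketch.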

Is the network boundary protected with a firewall with ingress and egress filtering?

Yes. All firewalls and load balancing facilities are provided by Azure and Amazon AWS.

Are public facing servers in a well-defined De-Militarized Zone (DMZ)?

Yes, this is inherited from Azure's default infrastructure zoning, and Bookafy has regional servers spread throughout the world.

Is internal network segmentation used to further isolate sensitive production resources such as PCI data?

PCI data is not stored. Payment forms from third-party providers such as Stripe are only framed by Bookafy, and Bookafy does not collect or store that data.

Is network Intrusion Detection or Prevention implemented and monitored?

A broad spectrum of monitoring tools, supplemented by notifications and alerts provided by Azure, remains constantly on. This includes intrusion detection and email confirmations of network access.

Are all desktops protected using regularly updated virus, worm, spyware and malicious code software?


Are servers protected using industry hardening practices? Are the practices documented?

Security services are utilized regularly to provide system security audits. 

Is there active vendor patch management for all operating systems, network devices and applications?

Yes. This is provided by Azure automatically via their service.

Are all production system errors and security events recorded and preserved?

Logs are preserved for a minimum of 1 month, with some remaining up to 6 months, depending on severity and action required.

Are security events and log data regularly reviewed?

Yes. Logs are reviewed daily, weekly and monthly – depending upon the nature of the log events.

Is there a documented privacy program in place with safeguards to ensure protection of client confidential information?


Is there a process in place to notify clients if any privacy breach occurs?


Do you store, process, or transmit (i.e. “handle”) Personally Identifiable Information (PII)?


In what country or countries is PII stored?

Most of our PII data is stored in the US. However, we are able to store account data for our enterprise customers in a specific regional center. For example, an Australian organization could elect to have its data stored in our Canberra Azure location, and European customers can store data in a European data center.

Are system logs protected from alteration and destruction?

This is provided by Azure and backed up on Dropbox Cloud Storage.

Are boundary and VLAN points of entry protected by intrusion protection and detection devices that provide alerts when under attack?

Yes. These services are included in our Azure firewall which protects against intrusion and sends automated alerts to our development team. 

Are logs and events correlated with a tool providing warnings of an attack in progress?

Yes, our security service includes logging and alerts of attacks in real time. 

How is data segregated from other clients within the solution, including networking, front-ends, back-end storage and backups?

Every client account is logically separated from other clients, through the use of a required persistent tenant identifier on all database records.

Additionally all application code requires this tenant identifier for all operations – both read and write. An automated testing regime is also in place to protect code changes from regressions and possible cross-tenant data contamination.

The tenant identifier is “hard linked” to every user account and logically enforced through fixed “WHERE” clauses on database queries and equivalent measures for file access. A platform user is not able to change or otherwise unlink their session or account from this tenant identifier. Thus there is no logical possibility of a user having login authorization under a different tenant identifier. Even if they tried to access pages using a different tenant’s ID, the system would reject the request due to the user account not being registered to the requested tenant ID.
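The "hard linked" tenant identifier described above can be sketched with an in-memory database. The table and column names (bookings, tenant_id) are illustrative, not Bookafy's actual schema; the key idea is that every read goes through a fixed, parameterized WHERE clause on the tenant identifier, so one tenant can never retrieve another tenant's rows.

```python
import sqlite3

# Illustrative multi-tenant isolation sketch using SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER, tenant_id TEXT, subject TEXT)")
conn.executemany(
    "INSERT INTO bookings VALUES (?, ?, ?)",
    [(1, "tenant-a", "Demo call"), (2, "tenant-b", "Intake")],
)

def bookings_for(tenant_id: str) -> list[tuple]:
    # The tenant_id predicate is fixed in the query text and bound as a
    # parameter -- callers cannot alter or remove it, mirroring the
    # enforced WHERE clause described above.
    return conn.execute(
        "SELECT id, subject FROM bookings WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
```

Even if a caller supplies another tenant's ID, they only ever see rows registered to the tenant they are authorized for, because the application layer derives `tenant_id` from the session rather than from user input.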

Do you have an Incident Response Plan?

Yes, a “living document” is maintained which outlines disaster and incident response checklists, contact details and key system facilities for understanding and responding to incidents.

What level of network protection is implemented?

We utilize Azure's Web Application Gateway (load balancer) and Next Generation Firewall to protect our network of virtual machines running on Azure Cloud.

Does the platform provide reports for Quality of Service (QOS) performance measurements (resource utilization, throughput, availability etc)?

Such metrics are not provided to clients, aside from availability and response timings as shown on our status page.

Is the disaster recovery program tested at least annually?

Yes, recovery checks are performed and tested annually.

What is the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) of the system?

The RTO is 4 hours, with RPO being 1 hour.

Do you provide backup and restore plans for individual clients?

All aspects are multi-tenanted, so backups are taken across the entire client base. Complete file backups run every 24 hours and benefit from Azure database point-in-time backups taken every 5 minutes. Backups are stored on Dropbox Cloud as well as on redundant Azure virtual machines.

What is the maximum time that backups are retained?

Database point-in-time backups are retained for 30 days, with general file backups for 90 days minimum.

What is the expected turnaround time for a data restore?

Any client restore in any non-disaster scenario must be requested and scheduled with us. Turnaround is between 1 and 2 business days. 

Can a single entity account be restored without impacting the entire platform?

If restoration of a specific record or artifact is required by a client, this can be performed on a per-request basis and is chargeable work. There is no impact on the platform or the client account.

Is High Availability provided – i. e. where one server instance becomes unavailable does another become available?

Multiple server instances run at all system tiers on Azure virtual machines, with the Web Application Gateway handling load balancing. Failure of a server instance within the data center is handled by Azure WAG, with the problem instance recycled and/or removed and replaced with a new instance.

Is data stored and available in another location (data center) to meet disaster recovery requirements?

Yes. All data is replicated to a second data center in a different geographic location, and backup data is also stored on Dropbox Cloud Storage.

Is the failover process an active/active, automated switchover process?

Failure of a server instance within the primary data center is handled by Azure WAG load balancers, with the problem instance recycled and/or removed and replaced with a new instance.

In the event that the entire data center were to have a critical failure, switchover to the secondary center is a manual process, as we need to perform a full assessment of the issue first to ensure there are no simple workarounds to keep the existing primary center available. If it is determined that a move to the secondary center is required, switchover is initiated manually to meet the target recovery objectives.